From patchwork Tue Feb 13 09:37:04 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 13554786
From: "Pankaj Raghav (Samsung)"
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: mcgrof@kernel.org, gost.dev@samsung.com, akpm@linux-foundation.org,
	kbusch@kernel.org, djwong@kernel.org, chandan.babu@oracle.com,
	p.raghav@samsung.com, linux-kernel@vger.kernel.org, hare@suse.de,
	willy@infradead.org, linux-mm@kvack.org, david@fromorbit.com
Subject: [RFC v2 05/14] readahead: align index to mapping_min_order in
	ondemand_ra and force_ra
Date: Tue, 13 Feb 2024 10:37:04 +0100
Message-ID: <20240213093713.1753368-6-kernel@pankajraghav.com>
In-Reply-To: <20240213093713.1753368-1-kernel@pankajraghav.com>
References: <20240213093713.1753368-1-kernel@pankajraghav.com>
From: Luis Chamberlain

Align the ra->start and ra->size to mapping_min_order in
ondemand_readahead(), and align the index to mapping_min_order in
force_page_cache_ra(). This will ensure that the folios allocated for
readahead that are added to the page cache are aligned to
mapping_min_order.

Signed-off-by: Luis Chamberlain
Signed-off-by: Pankaj Raghav
Acked-by: Darrick J. Wong
---
 mm/readahead.c | 48 ++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 40 insertions(+), 8 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 4fa7d0e65706..5e1ec7705c78 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -315,6 +315,7 @@ void force_page_cache_ra(struct readahead_control *ractl,
 	struct file_ra_state *ra = ractl->ra;
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
 	unsigned long max_pages, index;
+	unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
 
 	if (unlikely(!mapping->a_ops->read_folio && !mapping->a_ops->readahead))
 		return;
@@ -324,6 +325,13 @@ void force_page_cache_ra(struct readahead_control *ractl,
 	 * be up to the optimal hardware IO size
 	 */
 	index = readahead_index(ractl);
+	if (!IS_ALIGNED(index, min_nrpages)) {
+		unsigned long old_index = index;
+
+		index = round_down(index, min_nrpages);
+		nr_to_read += (old_index - index);
+	}
+
 	max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
 	nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
 	while (nr_to_read) {
@@ -332,6 +340,7 @@ void force_page_cache_ra(struct readahead_control *ractl,
 		if (this_chunk > nr_to_read)
 			this_chunk = nr_to_read;
 		ractl->_index = index;
+		VM_BUG_ON(!IS_ALIGNED(index, min_nrpages));
 		do_page_cache_ra(ractl, this_chunk, 0);
 
 		index += this_chunk;
@@ -344,11 +353,20 @@ void force_page_cache_ra(struct readahead_control *ractl,
  *  for small size, x 4 for medium, and x 2 for large
  * for 128k (32 page) max ra
  * 1-2 page = 16k, 3-4 page 32k, 5-8 page = 64k, > 8 page = 128k initial
+ *
+ * For higher order address space requirements we ensure no initial reads
+ * are ever less than the min number of pages required.
+ *
+ * We *always* cap the max io size allowed by the device.
  */
-static unsigned long get_init_ra_size(unsigned long size, unsigned long max)
+static unsigned long get_init_ra_size(unsigned long size,
+				      unsigned int min_nrpages,
+				      unsigned long max)
 {
 	unsigned long newsize = roundup_pow_of_two(size);
 
+	newsize = max_t(unsigned long, newsize, min_nrpages);
+
 	if (newsize <= max / 32)
 		newsize = newsize * 4;
 	else if (newsize <= max / 4)
@@ -356,6 +374,8 @@ static unsigned long get_init_ra_size(unsigned long size, unsigned long max)
 	else
 		newsize = max;
 
+	VM_BUG_ON(newsize & (min_nrpages - 1));
+
 	return newsize;
 }
 
@@ -364,14 +384,16 @@ static unsigned long get_init_ra_size(unsigned long size, unsigned long max)
  *  return it as the new window size.
  */
 static unsigned long get_next_ra_size(struct file_ra_state *ra,
+				      unsigned int min_nrpages,
 				      unsigned long max)
 {
-	unsigned long cur = ra->size;
+	unsigned long cur = max(ra->size, min_nrpages);
 
 	if (cur < max / 16)
 		return 4 * cur;
 	if (cur <= max / 2)
 		return 2 * cur;
+
 	return max;
 }
 
@@ -561,7 +583,11 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	unsigned long add_pages;
 	pgoff_t index = readahead_index(ractl);
 	pgoff_t expected, prev_index;
-	unsigned int order = folio ? folio_order(folio) : 0;
+	unsigned int min_order = mapping_min_folio_order(ractl->mapping);
+	unsigned int min_nrpages = mapping_min_folio_nrpages(ractl->mapping);
+	unsigned int order = folio ? folio_order(folio) : min_order;
+
+	VM_BUG_ON(!IS_ALIGNED(ractl->_index, min_nrpages));
 
 	/*
 	 * If the request exceeds the readahead window, allow the read to
@@ -583,8 +609,8 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	expected = round_down(ra->start + ra->size - ra->async_size,
 			1UL << order);
 	if (index == expected || index == (ra->start + ra->size)) {
-		ra->start += ra->size;
-		ra->size = get_next_ra_size(ra, max_pages);
+		ra->start += round_down(ra->size, min_nrpages);
+		ra->size = get_next_ra_size(ra, min_nrpages, max_pages);
 		ra->async_size = ra->size;
 		goto readit;
 	}
@@ -603,13 +629,18 @@ static void ondemand_readahead(struct readahead_control *ractl,
 				max_pages);
 		rcu_read_unlock();
 
+		start = round_down(start, min_nrpages);
+
+		VM_BUG_ON(folio->index & (folio_nr_pages(folio) - 1));
+
 		if (!start || start - index > max_pages)
 			return;
 
 		ra->start = start;
 		ra->size = start - index;	/* old async_size */
+
 		ra->size += req_size;
-		ra->size = get_next_ra_size(ra, max_pages);
+		ra->size = get_next_ra_size(ra, min_nrpages, max_pages);
 		ra->async_size = ra->size;
 		goto readit;
 	}
@@ -646,7 +677,7 @@ static void ondemand_readahead(struct readahead_control *ractl,
 
 initial_readahead:
 	ra->start = index;
-	ra->size = get_init_ra_size(req_size, max_pages);
+	ra->size = get_init_ra_size(req_size, min_nrpages, max_pages);
 	ra->async_size = ra->size > req_size ? ra->size - req_size : ra->size;
 
 readit:
@@ -657,7 +688,7 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	 * Take care of maximum IO pages as above.
 	 */
 	if (index == ra->start && ra->size == ra->async_size) {
-		add_pages = get_next_ra_size(ra, max_pages);
+		add_pages = get_next_ra_size(ra, min_nrpages, max_pages);
 		if (ra->size + add_pages <= max_pages) {
 			ra->async_size = add_pages;
 			ra->size += add_pages;
@@ -668,6 +699,7 @@ static void ondemand_readahead(struct readahead_control *ractl,
 		}
 	}
 
 	ractl->_index = ra->start;
+	VM_BUG_ON(!IS_ALIGNED(ractl->_index, min_nrpages));
 	page_cache_ra_order(ractl, ra, order);
 }