[mm-hotfixes-unstable] shmem: fix smaps BUG sleeping while atomic

Message ID 6fe3b3ec-abdf-332f-5c23-6a3b3a3b11a9@google.com (mailing list archive)

Commit Message

Hugh Dickins Aug. 23, 2023, 5:14 a.m. UTC
smaps_pte_hole_lookup() is calling shmem_partial_swap_usage() with page
table lock held: but shmem_partial_swap_usage() does cond_resched_rcu()
if need_resched(): "BUG: sleeping function called from invalid context".

Since shmem_partial_swap_usage() is designed to count across a range, but
smaps_pte_hole_lookup() only calls it for a single page slot, just break
out of the loop on the last or only page, before checking need_resched().

Fixes: 230100321518 ("mm/smaps: simplify shmem handling of pte holes")
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: stable@vger.kernel.org # 5.16+
---
 mm/shmem.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Comments

Peter Xu Aug. 23, 2023, 4:41 p.m. UTC | #1
On Tue, Aug 22, 2023 at 10:14:47PM -0700, Hugh Dickins wrote:
> smaps_pte_hole_lookup() is calling shmem_partial_swap_usage() with page
> table lock held: but shmem_partial_swap_usage() does cond_resched_rcu()
> if need_resched(): "BUG: sleeping function called from invalid context".
> 
> Since shmem_partial_swap_usage() is designed to count across a range, but
> smaps_pte_hole_lookup() only calls it for a single page slot, just break
> out of the loop on the last or only page, before checking need_resched().
> 
> Fixes: 230100321518 ("mm/smaps: simplify shmem handling of pte holes")
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Cc: stable@vger.kernel.org # 5.16+

Oops.. thanks Hugh.

Acked-by: Peter Xu <peterx@redhat.com>
Patch

diff --git a/mm/shmem.c b/mm/shmem.c
index 7a0c1e19d9f8..c512a5e82f8d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -806,14 +806,16 @@  unsigned long shmem_partial_swap_usage(struct address_space *mapping,
 	XA_STATE(xas, &mapping->i_pages, start);
 	struct page *page;
 	unsigned long swapped = 0;
+	unsigned long max = end - 1;
 
 	rcu_read_lock();
-	xas_for_each(&xas, page, end - 1) {
+	xas_for_each(&xas, page, max) {
 		if (xas_retry(&xas, page))
 			continue;
 		if (xa_is_value(page))
 			swapped++;
-
+		if (xas.xa_index == max)
+			break;
 		if (need_resched()) {
 			xas_pause(&xas);
 			cond_resched_rcu();