From patchwork Tue Apr 9 19:22:44 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13623157
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, cgroups@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton,
    "Matthew Wilcox (Oracle)", Peter Xu, Ryan Roberts, Yin Fengwei, Yang Shi,
    Zi Yan, Jonathan Corbet, Hugh Dickins, Yoshinori Sato, Rich Felker,
    John Paul Adrian Glaubitz, Chris Zankel, Max Filippov, Muchun Song,
    Miaohe Lin, Naoya Horiguchi, Richard Chang
Subject: [PATCH v1 01/18] mm: allow for detecting underflows with page_mapcount() again
Date: Tue, 9 Apr 2024 21:22:44 +0200
Message-ID: <20240409192301.907377-2-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

Commit 53277bcf126d ("mm: support page_mapcount() on page_has_type()
pages") made it impossible to detect mapcount underflows by treating
any negative raw mapcount value as a mapcount of 0.

We perform such underflow checks in zap_present_folio_ptes() and
zap_huge_pmd(), which would currently no longer trigger.

Let's check against PAGE_MAPCOUNT_RESERVE instead by using
page_type_has_type(), like page_has_type() would, so we can still
catch some underflows.

Fixes: 53277bcf126d ("mm: support page_mapcount() on page_has_type() pages")
Signed-off-by: David Hildenbrand
---
 include/linux/mm.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef34cf54c14f..0fb8a40f82dd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1229,11 +1229,10 @@ static inline void page_mapcount_reset(struct page *page)
  */
 static inline int page_mapcount(struct page *page)
 {
-	int mapcount = atomic_read(&page->_mapcount) + 1;
+	int mapcount = atomic_read(&page->_mapcount);
 
 	/* Handle page_has_type() pages */
-	if (mapcount < 0)
-		mapcount = 0;
+	mapcount = page_type_has_type(mapcount) ? 0 : mapcount + 1;
 	if (unlikely(PageCompound(page)))
 		mapcount += folio_entire_mapcount(page_folio(page));
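For illustration only, here is a standalone sketch of the check the patch switches to. The PAGE_MAPCOUNT_RESERVE value and the encoding of typed pages are assumptions for this sketch (the kernel defines both in its page-flags header); the point is that typed pages and small underflows can now be told apart:

#include <stdio.h>
#include <stdbool.h>

/* Assumed for this sketch; real page types store very negative raw values. */
#define PAGE_MAPCOUNT_RESERVE	(-128)

/* Mirrors page_type_has_type(): anything below the reserve is a page type. */
static bool has_type(int raw_mapcount)
{
	return raw_mapcount < PAGE_MAPCOUNT_RESERVE;
}

/* Mirrors the patched page_mapcount() for the non-compound case. */
static int mapcount(int raw_mapcount)
{
	return has_type(raw_mapcount) ? 0 : raw_mapcount + 1;
}

int main(void)
{
	printf("%d\n", mapcount(-1));	/* unmapped page           -> 0  */
	printf("%d\n", mapcount(0));	/* page mapped once        -> 1  */
	printf("%d\n", mapcount(-2));	/* underflow stays visible -> -1 */
	printf("%d\n", mapcount(-200));	/* typed page              -> 0  */
	return 0;
}

With the previous behavior, the -2 case would also have been reported as 0, hiding the underflow.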
From patchwork Tue Apr 9 19:22:45 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13623156
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, cgroups@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton,
    "Matthew Wilcox (Oracle)", Peter Xu, Ryan Roberts, Yin Fengwei, Yang Shi,
    Zi Yan, Jonathan Corbet, Hugh Dickins, Yoshinori Sato, Rich Felker,
    John Paul Adrian Glaubitz, Chris Zankel, Max Filippov, Muchun Song,
    Miaohe Lin, Naoya Horiguchi, Richard Chang
Subject: [PATCH v1 02/18] mm/rmap: always inline anon/file rmap duplication of a single PTE
Date: Tue, 9 Apr 2024 21:22:45 +0200
Message-ID: <20240409192301.907377-3-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

As we grow the code, the compiler might make stupid decisions and
unnecessarily degrade fork() performance. Let's make sure to always
inline functions that operate on a single PTE so the compiler will
always optimize out the loop and avoid a function call.

This is a preparation for maintaining a total mapcount for large folios.

Signed-off-by: David Hildenbrand
Reviewed-by: Yin Fengwei
---
 include/linux/rmap.h | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 9bf9324214fc..9549d78928bb 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -347,8 +347,12 @@ static inline void folio_dup_file_rmap_ptes(struct folio *folio,
 {
 	__folio_dup_file_rmap(folio, page, nr_pages, RMAP_LEVEL_PTE);
 }
-#define folio_dup_file_rmap_pte(folio, page) \
-	folio_dup_file_rmap_ptes(folio, page, 1)
+
+static __always_inline void folio_dup_file_rmap_pte(struct folio *folio,
+		struct page *page)
+{
+	__folio_dup_file_rmap(folio, page, 1, RMAP_LEVEL_PTE);
+}
 
 /**
  * folio_dup_file_rmap_pmd - duplicate a PMD mapping of a page range of a folio
@@ -448,8 +452,13 @@ static inline int folio_try_dup_anon_rmap_ptes(struct folio *folio,
 	return __folio_try_dup_anon_rmap(folio, page, nr_pages, src_vma,
 					 RMAP_LEVEL_PTE);
 }
-#define folio_try_dup_anon_rmap_pte(folio, page, vma) \
-	folio_try_dup_anon_rmap_ptes(folio, page, 1, vma)
+
+static __always_inline int folio_try_dup_anon_rmap_pte(struct folio *folio,
+		struct page *page, struct vm_area_struct *src_vma)
+{
+	return __folio_try_dup_anon_rmap(folio, page, 1, src_vma,
+					 RMAP_LEVEL_PTE);
+}
 
 /**
  * folio_try_dup_anon_rmap_pmd - try duplicating a PMD mapping of a page range
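As a rough, userspace-only illustration of why the wrappers are forced inline (not kernel code; the names below are made up): with a compile-time constant count and a forced-inline wrapper, the compiler can collapse the generic loop into a single increment, which is what the new *_pte() helpers guarantee for nr_pages == 1.

#include <stdio.h>

/* Generic helper with a loop, analogous to __folio_dup_file_rmap(). */
static inline __attribute__((always_inline)) void bump_many(int *ctr, int nr)
{
	do {
		(*ctr)++;
	} while (--nr > 0);
}

/* Single-element wrapper, analogous to folio_dup_file_rmap_pte(). */
static inline __attribute__((always_inline)) void bump_one(int *ctr)
{
	bump_many(ctr, 1);	/* folds to a single increment at -O2 */
}

int main(void)
{
	int count = 0;

	bump_one(&count);
	printf("%d\n", count);	/* prints 1 */
	return 0;
}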
From patchwork Tue Apr 9 19:22:46 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13623158
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, cgroups@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton,
    "Matthew Wilcox (Oracle)", Peter Xu, Ryan Roberts, Yin Fengwei, Yang Shi,
    Zi Yan, Jonathan Corbet, Hugh Dickins, Yoshinori Sato, Rich Felker,
    John Paul Adrian Glaubitz, Chris Zankel, Max Filippov, Muchun Song,
    Miaohe Lin, Naoya Horiguchi, Richard Chang
Subject: [PATCH v1 03/18] mm/rmap: add fast-path for small folios when adding/removing/duplicating
Date: Tue, 9 Apr 2024 21:22:46 +0200
Message-ID: <20240409192301.907377-4-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

Let's add a fast-path for small folios to all relevant rmap functions.
Note that only RMAP_LEVEL_PTE applies.

This is a preparation for tracking the mapcount of large folios in a
single value.

Signed-off-by: David Hildenbrand
Reviewed-by: Yin Fengwei
---
 include/linux/rmap.h | 13 +++++++++++++
 mm/rmap.c            | 26 ++++++++++++++++----------
 2 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 9549d78928bb..327f1ca5a487 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -322,6 +322,11 @@ static __always_inline void __folio_dup_file_rmap(struct folio *folio,
 	switch (level) {
 	case RMAP_LEVEL_PTE:
+		if (!folio_test_large(folio)) {
+			atomic_inc(&page->_mapcount);
+			break;
+		}
+
 		do {
 			atomic_inc(&page->_mapcount);
 		} while (page++, --nr_pages > 0);
@@ -405,6 +410,14 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
 			if (PageAnonExclusive(page + i))
 				return -EBUSY;
 		}
+
+		if (!folio_test_large(folio)) {
+			if (PageAnonExclusive(page))
+				ClearPageAnonExclusive(page);
+			atomic_inc(&page->_mapcount);
+			break;
+		}
+
 		do {
 			if (PageAnonExclusive(page))
 				ClearPageAnonExclusive(page);
diff --git a/mm/rmap.c b/mm/rmap.c
index 56b313aa2ebf..4bde6d60db6c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1172,15 +1172,18 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
 	switch (level) {
 	case RMAP_LEVEL_PTE:
+		if (!folio_test_large(folio)) {
+			nr = atomic_inc_and_test(&page->_mapcount);
+			break;
+		}
+
 		do {
 			first = atomic_inc_and_test(&page->_mapcount);
-			if (first && folio_test_large(folio)) {
+			if (first) {
 				first = atomic_inc_return_relaxed(mapped);
-				first = (first < ENTIRELY_MAPPED);
+				if (first < ENTIRELY_MAPPED)
+					nr++;
 			}
-
-			if (first)
-				nr++;
 		} while (page++, --nr_pages > 0);
 		break;
 	case RMAP_LEVEL_PMD:
@@ -1514,15 +1517,18 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 	switch (level) {
 	case RMAP_LEVEL_PTE:
+		if (!folio_test_large(folio)) {
+			nr = atomic_add_negative(-1, &page->_mapcount);
+			break;
+		}
+
 		do {
 			last = atomic_add_negative(-1, &page->_mapcount);
-			if (last && folio_test_large(folio)) {
+			if (last) {
 				last = atomic_dec_return_relaxed(mapped);
-				last = (last < ENTIRELY_MAPPED);
+				if (last < ENTIRELY_MAPPED)
+					nr++;
 			}
-
-			if (last)
-				nr++;
 		} while (page++, --nr_pages > 0);
 		break;
 	case RMAP_LEVEL_PMD:
From patchwork Tue Apr 9 19:22:47 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13623159
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, cgroups@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton,
    "Matthew Wilcox (Oracle)", Peter Xu, Ryan Roberts, Yin Fengwei, Yang Shi,
    Zi Yan, Jonathan Corbet, Hugh Dickins, Yoshinori Sato, Rich Felker,
    John Paul Adrian Glaubitz, Chris Zankel, Max Filippov, Muchun Song,
    Miaohe Lin, Naoya Horiguchi, Richard Chang
Subject: [PATCH v1 04/18] mm: track mapcount of large folios in single value
Date: Tue, 9 Apr 2024 21:22:47 +0200
Message-ID: <20240409192301.907377-5-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

Let's track the mapcount of large folios in a single value. The mapcount
of a large folio currently corresponds to the sum of the entire mapcount
and all page mapcounts. This sum is what we actually want to know in
folio_mapcount() and it is also sufficient for implementing
folio_mapped().

With PTE-mapped THP becoming more important and more widely used, we want
to avoid looping over all pages of a folio just to obtain the mapcount of
large folios. The comment "In the common case, avoid the loop when no
pages mapped by PTE" in folio_total_mapcount() no longer holds for mTHP
that are always mapped by PTE.

Further, we are planning on using folio_mapcount() more frequently, and
might even want to remove page mapcounts for large folios in some kernel
configs. Therefore, allow for reading the mapcount of large folios
efficiently and atomically without looping over any pages.

Maintain the mapcount also for hugetlb pages for simplicity. Use the new
mapcount to implement folio_mapcount() and folio_mapped(). Make
page_mapped() simply call folio_mapped(). We can now get rid of
folio_large_is_mapped().

_nr_pages_mapped is now only used in rmap code and for debugging purposes.
Keep folio_nr_pages_mapped() around, but document that its use should be
limited to rmap internals and debugging purposes.

This change implies one additional atomic add/sub whenever
mapping/unmapping (parts of) a large folio.

As we now batch RMAP operations for PTE-mapped THP during fork(), during
unmap/zap, and when PTE-remapping a PMD-mapped THP, and we adjust the
large mapcount for a PTE batch only once, the added overhead in the common
case is small. Only when unmapping individual pages of a large folio
(e.g., during COW), the overhead might be bigger in comparison, but it's
essentially one additional atomic operation.

Note that the refcount would overflow before the new mapcount could: each
mapping requires a folio reference.

Extend the documentation of folio_mapcount().

Signed-off-by: David Hildenbrand
Reviewed-by: Yin Fengwei
---
 Documentation/mm/transhuge.rst | 12 +++++-----
 include/linux/mm.h             | 44 ++++++++++++++++------------------
 include/linux/mm_types.h       |  5 ++--
 include/linux/rmap.h           | 10 ++++++++
 mm/debug.c                     |  3 ++-
 mm/hugetlb.c                   |  4 ++--
 mm/internal.h                  |  3 +++
 mm/khugepaged.c                |  2 +-
 mm/page_alloc.c                |  4 ++++
 mm/rmap.c                      | 34 +++++++++-----------------
 10 files changed, 62 insertions(+), 59 deletions(-)

diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst
index 93c9239b9ebe..1ba0ad63246c 100644
--- a/Documentation/mm/transhuge.rst
+++ b/Documentation/mm/transhuge.rst
@@ -116,14 +116,14 @@ pages:
     succeeds on tail pages.
 
   - map/unmap of a PMD entry for the whole THP increment/decrement
-    folio->_entire_mapcount and also increment/decrement
-    folio->_nr_pages_mapped by ENTIRELY_MAPPED when _entire_mapcount
-    goes from -1 to 0 or 0 to -1.
+    folio->_entire_mapcount, increment/decrement folio->_large_mapcount
+    and also increment/decrement folio->_nr_pages_mapped by ENTIRELY_MAPPED
+    when _entire_mapcount goes from -1 to 0 or 0 to -1.
 
   - map/unmap of individual pages with PTE entry increment/decrement
-    page->_mapcount and also increment/decrement folio->_nr_pages_mapped
-    when page->_mapcount goes from -1 to 0 or 0 to -1 as this counts
-    the number of pages mapped by PTE.
+    page->_mapcount, increment/decrement folio->_large_mapcount and also
+    increment/decrement folio->_nr_pages_mapped when page->_mapcount goes
+    from -1 to 0 or 0 to -1 as this counts the number of pages mapped by PTE.
 
 split_huge_page internally has to distribute the refcounts in the head
 page to the tail pages before clearing all PG_head/tail bits from the page
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0fb8a40f82dd..1862a216af15 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1239,16 +1239,26 @@ static inline int page_mapcount(struct page *page)
 	return mapcount;
 }
 
-int folio_total_mapcount(const struct folio *folio);
+static inline int folio_large_mapcount(const struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(!folio_test_large(folio), folio);
+	return atomic_read(&folio->_large_mapcount) + 1;
+}
 
 /**
- * folio_mapcount() - Calculate the number of mappings of this folio.
+ * folio_mapcount() - Number of mappings of this folio.
  * @folio: The folio.
  *
- * A large folio tracks both how many times the entire folio is mapped,
- * and how many times each individual page in the folio is mapped.
- * This function calculates the total number of times the folio is
- * mapped.
+ * The folio mapcount corresponds to the number of present user page table
+ * entries that reference any part of a folio. Each such present user page
+ * table entry must be paired with exactly on folio reference.
+ *
+ * For ordindary folios, each user page table entry (PTE/PMD/PUD/...) counts
+ * exactly once.
+ *
+ * For hugetlb folios, each abstracted "hugetlb" user page table entry that
+ * references the entire folio counts exactly once, even when such special
+ * page table entries are comprised of multiple ordinary page table entries.
  *
  * Return: The number of times this folio is mapped.
  */
@@ -1256,17 +1266,7 @@ static inline int folio_mapcount(const struct folio *folio)
 {
 	if (likely(!folio_test_large(folio)))
 		return atomic_read(&folio->_mapcount) + 1;
-	return folio_total_mapcount(folio);
-}
-
-static inline bool folio_large_is_mapped(const struct folio *folio)
-{
-	/*
-	 * Reading _entire_mapcount below could be omitted if hugetlb
-	 * participated in incrementing nr_pages_mapped when compound mapped.
-	 */
-	return atomic_read(&folio->_nr_pages_mapped) > 0 ||
-	       atomic_read(&folio->_entire_mapcount) >= 0;
+	return folio_large_mapcount(folio);
 }
 
 /**
@@ -1275,11 +1275,9 @@ static inline bool folio_large_is_mapped(const struct folio *folio)
  *
  * Return: True if any page in this folio is referenced by user page tables.
  */
-static inline bool folio_mapped(struct folio *folio)
+static inline bool folio_mapped(const struct folio *folio)
 {
-	if (likely(!folio_test_large(folio)))
-		return atomic_read(&folio->_mapcount) >= 0;
-	return folio_large_is_mapped(folio);
+	return folio_mapcount(folio) >= 1;
 }
 
 /*
@@ -1289,9 +1287,7 @@ static inline bool folio_mapped(struct folio *folio)
  */
 static inline bool page_mapped(const struct page *page)
 {
-	if (likely(!PageCompound(page)))
-		return atomic_read(&page->_mapcount) >= 0;
-	return folio_large_is_mapped(page_folio(page));
+	return folio_mapped(page_folio(page));
 }
 
 static inline struct page *virt_to_head_page(const void *x)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4260c595a79d..c432add95913 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -289,7 +289,8 @@ typedef struct {
  * @virtual: Virtual address in the kernel direct map.
  * @_last_cpupid: IDs of last CPU and last process that accessed the folio.
  * @_entire_mapcount: Do not use directly, call folio_entire_mapcount().
- * @_nr_pages_mapped: Do not use directly, call folio_mapcount().
+ * @_large_mapcount: Do not use directly, call folio_mapcount().
+ * @_nr_pages_mapped: Do not use outside of rmap and debug code.
 * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
 * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
 * @_hugetlb_subpool: Do not use directly, use accessor in hugetlb.h.
@@ -348,8 +349,8 @@ struct folio {
 		struct {
 			unsigned long _flags_1;
 			unsigned long _head_1;
-			unsigned long _folio_avail;
 	/* public: */
+			atomic_t _large_mapcount;
 			atomic_t _entire_mapcount;
 			atomic_t _nr_pages_mapped;
 			atomic_t _pincount;
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 327f1ca5a487..0f906dc6d280 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -273,6 +273,7 @@ static inline int hugetlb_try_dup_anon_rmap(struct folio *folio,
 		ClearPageAnonExclusive(&folio->page);
 	}
 	atomic_inc(&folio->_entire_mapcount);
+	atomic_inc(&folio->_large_mapcount);
 	return 0;
 }
 
@@ -306,6 +307,7 @@ static inline void hugetlb_add_file_rmap(struct folio *folio)
 	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
 
 	atomic_inc(&folio->_entire_mapcount);
+	atomic_inc(&folio->_large_mapcount);
 }
 
 static inline void hugetlb_remove_rmap(struct folio *folio)
@@ -313,11 +315,14 @@ static inline void hugetlb_remove_rmap(struct folio *folio)
 	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
 
 	atomic_dec(&folio->_entire_mapcount);
+	atomic_dec(&folio->_large_mapcount);
 }
 
 static __always_inline void __folio_dup_file_rmap(struct folio *folio,
 		struct page *page, int nr_pages, enum rmap_level level)
 {
+	const int orig_nr_pages = nr_pages;
+
 	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
 
 	switch (level) {
@@ -330,9 +335,11 @@ static __always_inline void __folio_dup_file_rmap(struct folio *folio,
 		do {
 			atomic_inc(&page->_mapcount);
 		} while (page++, --nr_pages > 0);
+		atomic_add(orig_nr_pages, &folio->_large_mapcount);
 		break;
 	case RMAP_LEVEL_PMD:
 		atomic_inc(&folio->_entire_mapcount);
+		atomic_inc(&folio->_large_mapcount);
 		break;
 	}
 }
@@ -382,6 +389,7 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
 		struct page *page, int nr_pages, struct vm_area_struct *src_vma,
 		enum rmap_level level)
 {
+	const int orig_nr_pages = nr_pages;
 	bool maybe_pinned;
 	int i;
 
@@ -423,6 +431,7 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
 			ClearPageAnonExclusive(page);
 			atomic_inc(&page->_mapcount);
 		} while (page++, --nr_pages > 0);
+		atomic_add(orig_nr_pages, &folio->_large_mapcount);
 		break;
 	case RMAP_LEVEL_PMD:
 		if (PageAnonExclusive(page)) {
@@ -431,6 +440,7 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
 			ClearPageAnonExclusive(page);
 		}
 		atomic_inc(&folio->_entire_mapcount);
+		atomic_inc(&folio->_large_mapcount);
 		break;
 	}
 	return 0;
diff --git a/mm/debug.c b/mm/debug.c
index b71186f1fb0b..d064db42af54 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -68,8 +68,9 @@ static void __dump_folio(struct folio *folio, struct page *page,
 			folio_ref_count(folio), mapcount, mapping,
 			folio->index + idx, pfn);
 	if (folio_test_large(folio)) {
-		pr_warn("head: order:%u entire_mapcount:%d nr_pages_mapped:%d pincount:%d\n",
+		pr_warn("head: order:%u mapcount:%d entire_mapcount:%d nr_pages_mapped:%d pincount:%d\n",
 				folio_order(folio),
+				folio_mapcount(folio),
 				folio_entire_mapcount(folio),
 				folio_nr_pages_mapped(folio),
 				atomic_read(&folio->_pincount));
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 454900c84b30..a8536349de13 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1517,7 +1517,7 @@ static void __destroy_compound_gigantic_folio(struct folio *folio,
 	struct page *p;
 
 	atomic_set(&folio->_entire_mapcount, 0);
-	atomic_set(&folio->_nr_pages_mapped, 0);
+	atomic_set(&folio->_large_mapcount, 0);
 	atomic_set(&folio->_pincount, 0);
 	for (i = 1; i < nr_pages; i++) {
@@ -2120,7 +2120,7 @@ static bool __prep_compound_gigantic_folio(struct folio *folio,
 	/* we rely on prep_new_hugetlb_folio to set the hugetlb flag */
 	folio_set_order(folio, order);
 	atomic_set(&folio->_entire_mapcount, -1);
-	atomic_set(&folio->_nr_pages_mapped, 0);
+	atomic_set(&folio->_large_mapcount, -1);
 	atomic_set(&folio->_pincount, 0);
 	return true;
diff --git a/mm/internal.h b/mm/internal.h
index 9d3250b4a08a..51fa6246769c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -72,6 +72,8 @@ void page_writeback_init(void);
 /*
  * How many individual pages have an elevated _mapcount.  Excludes
  * the folio's entire_mapcount.
+ *
+ * Don't use this function outside of rmap and debugging code.
  */
 static inline int folio_nr_pages_mapped(const struct folio *folio)
 {
@@ -610,6 +612,7 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
 	struct folio *folio = (struct folio *)page;
 
 	folio_set_order(folio, order);
+	atomic_set(&folio->_large_mapcount, -1);
 	atomic_set(&folio->_entire_mapcount, -1);
 	atomic_set(&folio->_nr_pages_mapped, 0);
 	atomic_set(&folio->_pincount, 0);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 89e2624fb3ff..2f73d2aa9ae8 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1358,7 +1358,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 		 * Check if the page has any GUP (or other external) pins.
 		 *
 		 * Here the check may be racy:
-		 * it may see total_mapcount > refcount in some cases?
+		 * it may see folio_mapcount() > folio_ref_count().
 		 * But such case is ephemeral we could always retry collapse
 		 * later.  However it may report false positive if the page
 		 * has excessive GUP pins (i.e. 512).  Anyway the same check
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index adbb7e6e0c72..393366d4a704 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -941,6 +941,10 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
 			bad_page(page, "nonzero entire_mapcount");
 			goto out;
 		}
+		if (unlikely(folio_large_mapcount(folio))) {
+			bad_page(page, "nonzero large_mapcount");
+			goto out;
+		}
 		if (unlikely(atomic_read(&folio->_nr_pages_mapped))) {
 			bad_page(page, "nonzero nr_pages_mapped");
 			goto out;
diff --git a/mm/rmap.c b/mm/rmap.c
index 4bde6d60db6c..2608c40dffad 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1138,34 +1138,12 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
 	return page_vma_mkclean_one(&pvmw);
 }
 
-int folio_total_mapcount(const struct folio *folio)
-{
-	int mapcount = folio_entire_mapcount(folio);
-	int nr_pages;
-	int i;
-
-	/* In the common case, avoid the loop when no pages mapped by PTE */
-	if (folio_nr_pages_mapped(folio) == 0)
-		return mapcount;
-	/*
-	 * Add all the PTE mappings of those pages mapped by PTE.
-	 * Limit the loop to folio_nr_pages_mapped()?
-	 * Perhaps: given all the raciness, that may be a good or a bad idea.
-	 */
-	nr_pages = folio_nr_pages(folio);
-	for (i = 0; i < nr_pages; i++)
-		mapcount += atomic_read(&folio_page(folio, i)->_mapcount);
-
-	/* But each of those _mapcounts was based on -1 */
-	mapcount += nr_pages;
-	return mapcount;
-}
-
 static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
 		struct page *page, int nr_pages, enum rmap_level level,
 		int *nr_pmdmapped)
 {
 	atomic_t *mapped = &folio->_nr_pages_mapped;
+	const int orig_nr_pages = nr_pages;
 	int first, nr = 0;
 
 	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
@@ -1185,6 +1163,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
 				nr++;
 			}
 		} while (page++, --nr_pages > 0);
+		atomic_add(orig_nr_pages, &folio->_large_mapcount);
 		break;
 	case RMAP_LEVEL_PMD:
 		first = atomic_inc_and_test(&folio->_entire_mapcount);
@@ -1201,6 +1180,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
 				nr = 0;
 			}
 		}
+		atomic_inc(&folio->_large_mapcount);
 		break;
 	}
 	return nr;
@@ -1436,10 +1416,14 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 			SetPageAnonExclusive(page);
 		}
 
+		/* increment count (starts at -1) */
+		atomic_set(&folio->_large_mapcount, nr - 1);
 		atomic_set(&folio->_nr_pages_mapped, nr);
 	} else {
 		/* increment count (starts at -1) */
 		atomic_set(&folio->_entire_mapcount, 0);
+		/* increment count (starts at -1) */
+		atomic_set(&folio->_large_mapcount, 0);
 		atomic_set(&folio->_nr_pages_mapped, ENTIRELY_MAPPED);
 		SetPageAnonExclusive(&folio->page);
 		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
@@ -1522,6 +1506,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 			break;
 		}
 
+		atomic_sub(nr_pages, &folio->_large_mapcount);
 		do {
 			last = atomic_add_negative(-1, &page->_mapcount);
 			if (last) {
@@ -1532,6 +1517,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		} while (page++, --nr_pages > 0);
 		break;
 	case RMAP_LEVEL_PMD:
+		atomic_dec(&folio->_large_mapcount);
 		last = atomic_add_negative(-1, &folio->_entire_mapcount);
 		if (last) {
 			nr = atomic_sub_return_relaxed(ENTIRELY_MAPPED, mapped);
@@ -2714,6 +2700,7 @@ void hugetlb_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
 	atomic_inc(&folio->_entire_mapcount);
+	atomic_inc(&folio->_large_mapcount);
 	if (flags & RMAP_EXCLUSIVE)
 		SetPageAnonExclusive(&folio->page);
 	VM_WARN_ON_FOLIO(folio_entire_mapcount(folio) > 1 &&
@@ -2728,6 +2715,7 @@ void hugetlb_add_new_anon_rmap(struct folio *folio,
 	BUG_ON(address < vma->vm_start || address >= vma->vm_end);
 	/* increment count (starts at -1) */
 	atomic_set(&folio->_entire_mapcount, 0);
+	atomic_set(&folio->_large_mapcount, 0);
 	folio_clear_hugetlb_restore_reserve(folio);
 	__folio_set_anon(folio, vma, address, true);
 	SetPageAnonExclusive(&folio->page);
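To spell out what the single counter buys, here is a simplified, illustration-only sketch. It reuses the helper names from the patch but is not compilable on its own, and it ignores races and the hugetlb special case; the -1 bias of the stored value mirrors _mapcount:

/* Before: sum the entire mapcount and every per-page mapcount (simplified). */
static int total_mapcount_old(const struct folio *folio)
{
	int i, mapcount = folio_entire_mapcount(folio);

	for (i = 0; i < folio_nr_pages(folio); i++)
		mapcount += atomic_read(&folio_page(folio, i)->_mapcount) + 1;
	return mapcount;
}

/* After: one counter per folio, read with a single atomic operation. */
static int total_mapcount_new(const struct folio *folio)
{
	return atomic_read(&folio->_large_mapcount) + 1;
}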
From patchwork Tue Apr 9 19:22:48 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13623160
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, cgroups@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton,
    "Matthew Wilcox (Oracle)", Peter Xu, Ryan Roberts, Yin Fengwei, Yang Shi,
    Zi Yan, Jonathan Corbet, Hugh Dickins, Yoshinori Sato, Rich Felker,
    John Paul Adrian Glaubitz, Chris Zankel, Max Filippov, Muchun Song,
    Miaohe Lin, Naoya Horiguchi, Richard Chang
Subject: [PATCH v1 05/18] mm: improve folio_likely_mapped_shared() using the mapcount of large folios
Date: Tue, 9 Apr 2024 21:22:48 +0200
Message-ID: <20240409192301.907377-6-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We can now read the mapcount of large folios very efficiently. Use it to
improve our handling of partially-mappable folios, falling back to making
a guess only in case the folio is not "obviously mapped shared".

We can now better detect partially-mappable folios where the first page is
not mapped as "mapped shared", reducing "false negatives"; but false
negatives are still possible.

While at it, fixup a wrong comment (false positive vs. false negative)
for KSM folios.

Signed-off-by: David Hildenbrand
Reviewed-by: Yin Fengwei
---
 include/linux/mm.h | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1862a216af15..daf687f0e8e5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2183,7 +2183,7 @@ static inline size_t folio_size(struct folio *folio)
  *    indicate "mapped shared" (false positive) when two VMAs in the same MM
  *    cover the same file range.
  * #. For (small) KSM folios, the return value can wrongly indicate "mapped
- *    shared" (false negative), when the folio is mapped multiple times into
+ *    shared" (false positive), when the folio is mapped multiple times into
  *    the same MM.
  *
  * Further, this function only considers current page table mappings that
@@ -2200,7 +2200,22 @@ static inline size_t folio_size(struct folio *folio)
  */
 static inline bool folio_likely_mapped_shared(struct folio *folio)
 {
-	return page_mapcount(folio_page(folio, 0)) > 1;
+	int mapcount = folio_mapcount(folio);
+
+	/* Only partially-mappable folios require more care. */
+	if (!folio_test_large(folio) || unlikely(folio_test_hugetlb(folio)))
+		return mapcount > 1;
+
+	/* A single mapping implies "mapped exclusively". */
+	if (mapcount <= 1)
+		return false;
+
+	/* If any page is mapped more than once we treat it "mapped shared". */
+	if (folio_entire_mapcount(folio) || mapcount > folio_nr_pages(folio))
+		return true;
+
+	/* Let's guess based on the first subpage. */
+	return atomic_read(&folio->_mapcount) > 0;
 }
 
 #ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE
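A worked example of the new heuristic, assuming a 16-page folio that is not PMD-mapped: folio_mapcount() <= 1 means "mapped exclusively" without any guessing; a value of 17 or more cannot happen unless some page is mapped at least twice, so "mapped shared" is reported without guessing; an ambiguous value such as 16 (one process mapping every page once looks the same as two processes mapping eight pages each) still falls back to guessing from the first subpage, which is why false negatives remain possible, as noted above.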
From patchwork Tue Apr 9 19:22:49 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13623161
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, cgroups@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton,
    "Matthew Wilcox (Oracle)", Peter Xu, Ryan Roberts, Yin Fengwei, Yang Shi,
    Zi Yan, Jonathan Corbet, Hugh Dickins, Yoshinori Sato, Rich Felker,
    John Paul Adrian Glaubitz, Chris Zankel, Max Filippov, Muchun Song,
    Miaohe Lin, Naoya Horiguchi, Richard Chang
Subject: [PATCH v1 06/18] mm: make folio_mapcount() return 0 for small typed folios
Date: Tue, 9 Apr 2024 21:22:49 +0200
Message-ID: <20240409192301.907377-7-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We already handle it properly for large folios. Let's also return "0"
for small typed folios, like page_mapcount() currently would.

Consequently, folio_mapcount() will never return negative values for
typed folios, but may return negative values for underflows.

Signed-off-by: David Hildenbrand
---
 include/linux/mm.h | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index daf687f0e8e5..d453232bba62 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1260,12 +1260,19 @@ static inline int folio_large_mapcount(const struct folio *folio)
  * references the entire folio counts exactly once, even when such special
  * page table entries are comprised of multiple ordinary page table entries.
  *
+ * Will report 0 for pages which cannot be mapped into userspace, such as
+ * slab, page tables and similar.
+ *
  * Return: The number of times this folio is mapped.
  */
 static inline int folio_mapcount(const struct folio *folio)
 {
-	if (likely(!folio_test_large(folio)))
-		return atomic_read(&folio->_mapcount) + 1;
+	int mapcount;
+
+	if (likely(!folio_test_large(folio))) {
+		mapcount = atomic_read(&folio->_mapcount);
+		return page_type_has_type(mapcount) ? 0 : mapcount + 1;
+	}
 	return folio_large_mapcount(folio);
 }
From patchwork Tue Apr 9 19:22:50 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13623162
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, cgroups@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton,
    "Matthew Wilcox (Oracle)", Peter Xu, Ryan Roberts, Yin Fengwei, Yang Shi,
    Zi Yan, Jonathan Corbet, Hugh Dickins, Yoshinori Sato, Rich Felker,
    John Paul Adrian Glaubitz, Chris Zankel, Max Filippov, Muchun Song,
    Miaohe Lin, Naoya Horiguchi, Richard Chang
Subject: [PATCH v1 07/18] mm/memory: use folio_mapcount() in zap_present_folio_ptes()
Date: Tue, 9 Apr 2024 21:22:50 +0200
Message-ID: <20240409192301.907377-8-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We want to limit the use of page_mapcount() to the places where it is
absolutely necessary. In zap_present_folio_ptes(), let's simply check
folio_mapcount(). If there is some issue, it will underflow at some
point either way when unmapping.

As already documented in commit 10ebac4f95e7 ("mm/memory: optimize
unmap/zap with PTE-mapped THP"): "If we ever have a cheap
folio_mapcount(), we might just want to check for underflows there."

There is no change for small folios. For large folios, we'll now catch
more underflows when batch-unmapping, because instead of only testing
the mapcount of the first subpage, we'll test if the folio mapcount
underflows.

Signed-off-by: David Hildenbrand
---
 mm/memory.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 78422d1c7381..178492efb4af 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1502,8 +1502,7 @@ static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
 
 	if (!delay_rmap) {
 		folio_remove_rmap_ptes(folio, page, nr, vma);
-		/* Only sanity-check the first page in a batch. */
-		if (unlikely(page_mapcount(page) < 0))
+		if (unlikely(folio_mapcount(folio) < 0))
 			print_bad_pte(vma, addr, ptent, page);
 	}
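In concrete terms, for large folios: the old check only inspected the first subpage of each batch, so an over-decrement elsewhere in the folio went unnoticed; with the folio-wide counter, any excess unmap eventually drives folio_mapcount() below zero, and the check fires regardless of which subpage was affected.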
From patchwork Tue Apr 9 19:22:51 2024
From: David Hildenbrand
Subject: [PATCH v1 08/18] mm/huge_memory: use folio_mapcount() in zap_huge_pmd() sanity check
Date: Tue, 9 Apr 2024 21:22:51 +0200
Message-ID: <20240409192301.907377-9-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We want to limit the use of page_mapcount() to the places where it is
absolutely necessary.

Let's similarly check for folio_mapcount() underflows instead of
page_mapcount() underflows like we do in zap_present_folio_ptes() now.

Instead of the VM_BUG_ON(), we should actually be doing something like
print_bad_pte(). For now, let's keep it simple and use WARN_ON_ONCE(),
performing that check independently of DEBUG_VM.

Signed-off-by: David Hildenbrand
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d8d2ed80b0bf..68ac27d229ef 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1851,7 +1851,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			folio = page_folio(page);
 			folio_remove_rmap_pmd(folio, page, vma);
-			VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
+			WARN_ON_ONCE(folio_mapcount(folio) < 0);
 			VM_BUG_ON_PAGE(!PageHead(page), page);
 		} else if (thp_migration_supported()) {
 			swp_entry_t entry;
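As a rough userspace model of what "independently of DEBUG_VM" buys us
(simplified stand-ins, not the kernel's VM_BUG_ON_PAGE()/WARN_ON_ONCE()
definitions): a debug-only assert vanishes on non-debug builds, while a
warn-once check always fires, but only the first time:

#include <stdio.h>
#include <stdlib.h>

/* Debug-only assert: compiled out unless the (model) debug option is set. */
#ifdef CONFIG_DEBUG_VM_MODEL
#define VM_ASSERT(cond)		do { if (!(cond)) abort(); } while (0)
#else
#define VM_ASSERT(cond)		do { } while (0)
#endif

/* Always-compiled check that reports at most once. */
#define WARN_ONCE_MODEL(cond) do {				\
	static int warned_once;					\
	if ((cond) && !warned_once) {				\
		warned_once = 1;				\
		fprintf(stderr, "WARNING: %s\n", #cond);	\
	}							\
} while (0)

int main(void)
{
	int folio_mapcount = -1;	/* simulated underflow */

	VM_ASSERT(folio_mapcount >= 0);		/* silent without the debug option */
	WARN_ONCE_MODEL(folio_mapcount < 0);	/* always reports the problem ... */
	WARN_ONCE_MODEL(folio_mapcount < 0);	/* ... but only the first time */
	return 0;
}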
From patchwork Tue Apr 9 19:22:52 2024
From: David Hildenbrand
Subject: [PATCH v1 09/18] mm/memory-failure: use folio_mapcount() in hwpoison_user_mappings()
Date: Tue, 9 Apr 2024 21:22:52 +0200
Message-ID: <20240409192301.907377-10-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We want to limit the use of page_mapcount() to the places where it is
absolutely necessary.

We can only unmap full folios; page_mapped(), which we check here, is
translated to folio_mapped() -- based on folio_mapcount(). So let's
print the folio mapcount instead.
Signed-off-by: David Hildenbrand
---
 mm/memory-failure.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 88359a185c5f..ee2f4b8905ef 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1628,8 +1628,8 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,

 	unmap_success = !page_mapped(p);
 	if (!unmap_success)
-		pr_err("%#lx: failed to unmap page (mapcount=%d)\n",
-		       pfn, page_mapcount(p));
+		pr_err("%#lx: failed to unmap page (folio mapcount=%d)\n",
+		       pfn, folio_mapcount(page_folio(p)));

 	/*
 	 * try_to_unmap() might put mlocked page in lru cache, so call
From patchwork Tue Apr 9 19:22:53 2024
From: David Hildenbrand
Subject: [PATCH v1 10/18] mm/page_alloc: use folio_mapped() in __alloc_contig_migrate_range()
Date: Tue, 9 Apr 2024 21:22:53 +0200
Message-ID: <20240409192301.907377-11-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We want to limit the use of page_mapcount() to the places where it is
absolutely necessary.

For tracing purposes, we use page_mapcount() in
__alloc_contig_migrate_range(). Adding that mapcount to total_mapped
sounds strange: total_migrated and total_reclaimed would count each
page only once, not multiple times.

But then, isolate_migratepages_range() adds each folio only once to the
list, so we would only query the mapcount of the first page of each
folio -- which doesn't make much sense for large folios.

Let's simply use folio_mapped() * folio_nr_pages(), which makes more
sense as nr_migratepages is also incremented by the number of pages in
the folio in case of successful migration.
Signed-off-by: David Hildenbrand
---
 mm/page_alloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 393366d4a704..40fc0f60e021 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6389,8 +6389,12 @@ int __alloc_contig_migrate_range(struct compact_control *cc,

 		if (trace_mm_alloc_contig_migrate_range_info_enabled()) {
 			total_reclaimed += nr_reclaimed;
-			list_for_each_entry(page, &cc->migratepages, lru)
-				total_mapped += page_mapcount(page);
+			list_for_each_entry(page, &cc->migratepages, lru) {
+				struct folio *folio = page_folio(page);
+
+				total_mapped += folio_mapped(folio) *
+						folio_nr_pages(folio);
+			}
 		}

 		ret = migrate_pages(&cc->migratepages, alloc_migration_target,
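For a concrete feel of the new accounting, a small userspace model (made-up
folio sizes and mapcounts, not kernel code) comparing the two ways of
accumulating total_mapped over an isolation list:

#include <stdio.h>
#include <stdbool.h>

/* A stand-in for one isolated folio on cc->migratepages. */
struct folio_model {
	long nr_pages;			/* folio_nr_pages() */
	bool mapped;			/* folio_mapped() */
	int first_page_mapcount;	/* what page_mapcount(first page) reported */
};

int main(void)
{
	/* One mapped order-9 folio, one unmapped order-0, one mapped order-0. */
	struct folio_model list[3] = {
		{ .nr_pages = 512, .mapped = true,  .first_page_mapcount = 1 },
		{ .nr_pages = 1,   .mapped = false, .first_page_mapcount = 0 },
		{ .nr_pages = 1,   .mapped = true,  .first_page_mapcount = 1 },
	};
	long old_total = 0, new_total = 0;

	for (int i = 0; i < 3; i++) {
		old_total += list[i].first_page_mapcount;	/* old: 2 */
		new_total += list[i].mapped * list[i].nr_pages;	/* new: 513 */
	}
	/* The new value matches how nr_migratepages is accounted: per page. */
	printf("old total_mapped=%ld, new total_mapped=%ld\n", old_total, new_total);
	return 0;
}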
From patchwork Tue Apr 9 19:22:54 2024
From: David Hildenbrand
Subject: [PATCH v1 11/18] mm/migrate: use folio_likely_mapped_shared() in add_page_for_migration()
Date: Tue, 9 Apr 2024 21:22:54 +0200
Message-ID: <20240409192301.907377-12-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We want to limit the use of page_mapcount() to the places where it is
absolutely necessary.

In add_page_for_migration(), we actually want to check if the folio is
mapped shared, to reject such folios. So let's use
folio_likely_mapped_shared() instead.

For small folios, fully mapped THP, and hugetlb folios, there is no
change. For partially mapped, shared THP, we should now do a better job
at rejecting such folios.
Signed-off-by: David Hildenbrand
---
 mm/migrate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 285072bca29c..d87ce32645d4 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2140,7 +2140,7 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
 		goto out_putfolio;

 	err = -EACCES;
-	if (page_mapcount(page) > 1 && !migrate_all)
+	if (folio_likely_mapped_shared(folio) && !migrate_all)
 		goto out_putfolio;

 	err = -EBUSY;
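Why "page_mapcount(page) > 1" is a weak sharedness test for large folios: a
sketch with made-up mapcounts (not the kernel's folio_likely_mapped_shared()
heuristic) where a partially mapped THP is shared between two processes even
though the subpage we look at is mapped exactly once:

#include <stdio.h>

#define NR_PAGES 4

int main(void)
{
	/*
	 * A 4-page THP: process A maps subpages 0 and 1, process B maps
	 * subpages 2 and 3. Every subpage has a per-page mapcount of 1.
	 */
	int page_mapcount[NR_PAGES] = { 1, 1, 1, 1 };
	int mapped_by_processes = 2;

	/* Old check on the page at A's address: looks exclusive. */
	printf("page_mapcount > 1? %s (folio actually shared by %d processes)\n",
	       page_mapcount[0] > 1 ? "yes" : "no", mapped_by_processes);

	/*
	 * The folio-level helper instead asks "is this folio likely mapped
	 * by more than one process?", so such a folio can now be rejected.
	 */
	return 0;
}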
From patchwork Tue Apr 9 19:22:55 2024
From: David Hildenbrand
Subject: [PATCH v1 12/18] sh/mm/cache: use folio_mapped() in copy_from_user_page()
Date: Tue, 9 Apr 2024 21:22:55 +0200
Message-ID: <20240409192301.907377-13-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We want to limit the use of page_mapcount() to the places where it is
absolutely necessary.

We're already using folio_mapped() in copy_user_highpage() and
copy_to_user_page() for a similar purpose, so let's simply use it for
copy_from_user_page() as well.

There is no change for small folios. Likely we won't stumble over many
large folios on sh in that code either way.

Signed-off-by: David Hildenbrand
---
 arch/sh/mm/cache.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c
index 9bcaa5619eab..d8be352e14d2 100644
--- a/arch/sh/mm/cache.c
+++ b/arch/sh/mm/cache.c
@@ -84,7 +84,7 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
 {
 	struct folio *folio = page_folio(page);

-	if (boot_cpu_data.dcache.n_aliases && page_mapcount(page) &&
+	if (boot_cpu_data.dcache.n_aliases && folio_mapped(folio) &&
 	    test_bit(PG_dcache_clean, &folio->flags)) {
 		void *vfrom = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
 		memcpy(dst, vfrom, len);
From patchwork Tue Apr 9 19:22:56 2024
From: David Hildenbrand
Subject: [PATCH v1 13/18] mm/filemap: use folio_mapcount() in filemap_unaccount_folio()
Date: Tue, 9 Apr 2024 21:22:56 +0200
Message-ID: <20240409192301.907377-14-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We want to limit the use of page_mapcount() to the places where it is
absolutely necessary.

Let's use folio_mapcount() instead of page_mapcount() in
filemap_unaccount_folio(). No functional change intended, because we're
only dealing with small folios there.
Signed-off-by: David Hildenbrand
---
 mm/filemap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index c668e11cd6ef..d4aa82ad5b59 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -168,7 +168,7 @@ static void filemap_unaccount_folio(struct address_space *mapping,
 		add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);

 	if (mapping_exiting(mapping) && !folio_test_large(folio)) {
-		int mapcount = page_mapcount(&folio->page);
+		int mapcount = folio_mapcount(folio);

 		if (folio_ref_count(folio) >= mapcount + 2) {
 			/*
From patchwork Tue Apr 9 19:22:57 2024
From: David Hildenbrand
Subject: [PATCH v1 14/18] mm/migrate_device: use folio_mapcount() in migrate_vma_check_page()
Date: Tue, 9 Apr 2024 21:22:57 +0200
Message-ID: <20240409192301.907377-15-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We want to limit the use of page_mapcount() to the places where it is
absolutely necessary.

Let's convert migrate_vma_check_page() to work on a folio internally so
we can remove the page_mapcount() usage. Note that we reject any large
folios.

There is a lot more folio conversion to be had, but that has to wait
for another day. No functional change intended.

Signed-off-by: David Hildenbrand
---
 mm/migrate_device.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index d40b46ae9d65..b929b450b77c 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -324,6 +324,8 @@ static void migrate_vma_collect(struct migrate_vma *migrate)
  */
 static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 {
+	struct folio *folio = page_folio(page);
+
 	/*
 	 * One extra ref because caller holds an extra reference, either from
 	 * isolate_lru_page() for a regular page, or migrate_vma_collect() for
@@ -336,18 +338,18 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 	 * check them than regular pages, because they can be mapped with a pmd
 	 * or with a pte (split pte mapping).
 	 */
-	if (PageCompound(page))
+	if (folio_test_large(folio))
 		return false;

 	/* Page from ZONE_DEVICE have one extra reference */
-	if (is_zone_device_page(page))
+	if (folio_is_zone_device(folio))
 		extra++;

 	/* For file back page */
-	if (page_mapping(page))
-		extra += 1 + page_has_private(page);
+	if (folio_mapping(folio))
+		extra += 1 + folio_has_private(folio);

-	if ((page_count(page) - extra) > page_mapcount(page))
+	if ((folio_ref_count(folio) - extra) > folio_mapcount(folio))
 		return false;

 	return true;
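The final comparison is a common "is anybody else still holding this?"
pattern. A small userspace model of the reference bookkeeping (illustrative
numbers and a simplified rule, not the exact kernel logic):

#include <stdio.h>
#include <stdbool.h>

/*
 * Model: every mapping takes a reference, the caller/isolation takes one,
 * ZONE_DEVICE and the page cache (plus private data) take more. Any
 * reference beyond that means an unknown party still holds the folio, so
 * migration must be skipped.
 */
static bool can_migrate(int refcount, int mapcount, bool zone_device,
			bool file_backed, bool has_private)
{
	int extra = 1;				/* caller / isolation reference */

	if (zone_device)
		extra++;			/* ZONE_DEVICE pages carry one more */
	if (file_backed)
		extra += 1 + has_private;	/* page cache (+ private data) */

	return (refcount - extra) <= mapcount;
}

int main(void)
{
	/* Anonymous folio, mapped once, no surprise references: OK. */
	printf("%d\n", can_migrate(2, 1, false, false, false));
	/* Same folio, but someone else grabbed an extra reference: not OK. */
	printf("%d\n", can_migrate(3, 1, false, false, false));
	return 0;
}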
From patchwork Tue Apr 9 19:22:58 2024
From: David Hildenbrand
Subject: [PATCH v1 15/18] trace/events/page_ref: trace the raw page mapcount value
Date: Tue, 9 Apr 2024 21:22:58 +0200
Message-ID: <20240409192301.907377-16-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We want to limit the use of page_mapcount() to the places where it is
absolutely necessary.

We already trace the raw page->refcount, raw page->flags and raw
page->mapping, and don't involve any folios. Let's also trace the raw
mapcount value; it does not consider the entire mapcount of large
folios, and we don't add "1" to it.

When dealing with typed folios, this makes a lot more sense. ... and
it's for debugging purposes only either way.

Signed-off-by: David Hildenbrand
---
 include/trace/events/page_ref.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/trace/events/page_ref.h b/include/trace/events/page_ref.h
index 8a99c1cd417b..fe33a255b7d0 100644
--- a/include/trace/events/page_ref.h
+++ b/include/trace/events/page_ref.h
@@ -30,7 +30,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_template,
 		__entry->pfn = page_to_pfn(page);
 		__entry->flags = page->flags;
 		__entry->count = page_ref_count(page);
-		__entry->mapcount = page_mapcount(page);
+		__entry->mapcount = atomic_read(&page->_mapcount);
 		__entry->mapping = page->mapping;
 		__entry->mt = get_pageblock_migratetype(page);
 		__entry->val = v;
@@ -79,7 +79,7 @@ DECLARE_EVENT_CLASS(page_ref_mod_and_test_template,
 		__entry->pfn = page_to_pfn(page);
 		__entry->flags = page->flags;
 		__entry->count = page_ref_count(page);
-		__entry->mapcount = page_mapcount(page);
+		__entry->mapcount = atomic_read(&page->_mapcount);
 		__entry->mapping = page->mapping;
 		__entry->mt = get_pageblock_migratetype(page);
 		__entry->val = v;
From patchwork Tue Apr 9 19:22:59 2024
From: David Hildenbrand
Subject: [PATCH v1 16/18] xtensa/mm: convert check_tlb_entry() to sanity check folios
Date: Tue, 9 Apr 2024 21:22:59 +0200
Message-ID: <20240409192301.907377-17-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

We want to limit the use of page_mapcount() to the places where it is
absolutely necessary.

So let's convert check_tlb_entry() to perform sanity checks on folios
instead of pages.
This essentially already happened: page_count() is implemented based on
folio_ref_count(), and page_mapped() based on folio_mapped() internally.
However, we would have printed the page_mapcount(), which does not
really match what page_mapped() would have checked.

Let's simply print the folio mapcount to avoid using page_mapcount().
For small folios there is no change.

Signed-off-by: David Hildenbrand
---
 arch/xtensa/mm/tlb.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/xtensa/mm/tlb.c b/arch/xtensa/mm/tlb.c
index 4f974b74883c..d8b60d6e50a8 100644
--- a/arch/xtensa/mm/tlb.c
+++ b/arch/xtensa/mm/tlb.c
@@ -256,12 +256,13 @@ static int check_tlb_entry(unsigned w, unsigned e, bool dtlb)
 			dtlb ? 'D' : 'I', w, e, r0, r1, pte);
 		if (pte == 0 || !pte_present(__pte(pte))) {
 			struct page *p = pfn_to_page(r1 >> PAGE_SHIFT);
-			pr_err("page refcount: %d, mapcount: %d\n",
-			       page_count(p),
-			       page_mapcount(p));
-			if (!page_count(p))
+			struct folio *f = page_folio(p);
+
+			pr_err("folio refcount: %d, mapcount: %d\n",
+			       folio_ref_count(f), folio_mapcount(f));
+			if (!folio_ref_count(f))
 				rc |= TLB_INSANE;
-			else if (page_mapcount(p))
+			else if (folio_mapped(f))
 				rc |= TLB_SUSPICIOUS;
 		} else {
 			rc |= TLB_INSANE;
From patchwork Tue Apr 9 19:23:00 2024
From: David Hildenbrand
Subject: [PATCH v1 17/18] mm/debug: print only page mapcount (excluding folio entire mapcount) in __dump_folio()
Date: Tue, 9 Apr 2024 21:23:00 +0200
Message-ID: <20240409192301.907377-18-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

Let's simplify and only print the page mapcount: we already print the
large folio mapcount and the entire folio mapcount for large folios
separately; that should be sufficient to figure out what's happening.

While at it, print the page mapcount also if it had an underflow,
filtering out only typed pages.

Signed-off-by: David Hildenbrand
---
 mm/debug.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/mm/debug.c b/mm/debug.c
index d064db42af54..69e524c3e601 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -55,15 +55,10 @@ static void __dump_folio(struct folio *folio, struct page *page,
 		unsigned long pfn, unsigned long idx)
 {
 	struct address_space *mapping = folio_mapping(folio);
-	int mapcount = atomic_read(&page->_mapcount) + 1;
+	int mapcount = atomic_read(&page->_mapcount);
 	char *type = "";

-	/* Open-code page_mapcount() to avoid looking up a stale folio */
-	if (mapcount < 0)
-		mapcount = 0;
-	if (folio_test_large(folio))
-		mapcount += folio_entire_mapcount(folio);
-
+	mapcount = page_type_has_type(mapcount) ? 0 : mapcount + 1;
 	pr_warn("page: refcount:%d mapcount:%d mapping:%p index:%#lx pfn:%#lx\n",
 			folio_ref_count(folio), mapcount, mapping,
 			folio->index + idx, pfn);
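The raw _mapcount handling in the last two patches relies on the field's
off-by-one encoding, and on page types being stashed in the same field for
unmapped pages. A tiny userspace model of that decoding step, using a made-up
threshold rather than the kernel's PAGE_MAPCOUNT_RESERVE:

#include <stdio.h>
#include <stdbool.h>

/* Made-up threshold standing in for the kernel's PAGE_MAPCOUNT_RESERVE. */
#define MAPCOUNT_RESERVE_MODEL	(-128)

/* A raw value below the reserve means "this is a typed, unmapped page". */
static bool has_page_type(int raw)
{
	return raw < MAPCOUNT_RESERVE_MODEL;
}

/* _mapcount is stored off by one: a raw -1 means "not mapped at all". */
static int decode_mapcount(int raw)
{
	return has_page_type(raw) ? 0 : raw + 1;
}

int main(void)
{
	printf("%d\n", decode_mapcount(-1));	/* unmapped page        -> 0 */
	printf("%d\n", decode_mapcount(0));	/* mapped once          -> 1 */
	printf("%d\n", decode_mapcount(-3));	/* underflow            -> -2, now visible */
	printf("%d\n", decode_mapcount(-200));	/* typed (e.g. buddy)   -> 0 */
	return 0;
}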
From patchwork Tue Apr 9 19:23:01 2024
From: David Hildenbrand
Subject: [PATCH v1 18/18] Documentation/admin-guide/cgroup-v1/memory.rst: don't reference page_mapcount()
Date: Tue, 9 Apr 2024 21:23:01 +0200
Message-ID: <20240409192301.907377-19-david@redhat.com>
In-Reply-To: <20240409192301.907377-1-david@redhat.com>
References: <20240409192301.907377-1-david@redhat.com>

Let's stop talking about page_mapcount().

Signed-off-by: David Hildenbrand
---
 Documentation/admin-guide/cgroup-v1/memory.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 46110e6a31bb..9cde26d33843 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -802,8 +802,8 @@ a page or a swap can be moved only when it is charged to the task's current
 | | anonymous pages, file pages (and swaps) in the range mmapped by the task |
 | | will be moved even if the task hasn't done page fault, i.e. they might |
 | | not be the task's "RSS", but other task's "RSS" that maps the same file. |
-| | And mapcount of the page is ignored (the page can be moved even if |
-| | page_mapcount(page) > 1). You must enable Swap Extension (see 2.4) to |
+| | The mapcount of the page is ignored (the page can be moved independent |
+| | of the mapcount). You must enable Swap Extension (see 2.4) to |
 | | enable move of swap charges. |
 +---+--------------------------------------------------------------------------+