From patchwork Fri Nov 24 13:26:10 2023
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton,
 Linus Torvalds, Ryan Roberts, Matthew Wilcox, Hugh Dickins,
 Yin Fengwei, Yang Shi, Ying Huang, Zi Yan, Peter Zijlstra,
 Ingo Molnar, Will Deacon, Waiman Long, "Paul E. McKenney"
Subject: [PATCH WIP v1 05/20] mm/rmap: abstract total mapcount operations
 for partially-mappable folios
Date: Fri, 24 Nov 2023 14:26:10 +0100
Message-ID: <20231124132626.235350-6-david@redhat.com>
In-Reply-To: <20231124132626.235350-1-david@redhat.com>
References: <20231124132626.235350-1-david@redhat.com>

Let's prepare for doing additional accounting whenever modifying the
total mapcount of partially-mappable (!hugetlb) folios. Pass the VMA
as well.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/rmap.h | 41 ++++++++++++++++++++++++++++++++++++++++-
 mm/rmap.c            | 23 ++++++++++++-----------
 2 files changed, 52 insertions(+), 12 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6cb497f6feab..9d5c2ed6ced5 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -168,6 +168,39 @@ static inline void anon_vma_merge(struct vm_area_struct *vma,
 
 struct anon_vma *folio_get_anon_vma(struct folio *folio);
 
+static inline void folio_set_large_mapcount(struct folio *folio,
+		int count, struct vm_area_struct *vma)
+{
+	VM_WARN_ON_FOLIO(!folio_test_large_rmappable(folio), folio);
+	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
+	/* increment count (starts at -1) */
+	atomic_set(&folio->_total_mapcount, count - 1);
+}
+
+static inline void folio_inc_large_mapcount(struct folio *folio,
+		struct vm_area_struct *vma)
+{
+	VM_WARN_ON_FOLIO(!folio_test_large_rmappable(folio), folio);
+	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
+	atomic_inc(&folio->_total_mapcount);
+}
+
+static inline void folio_add_large_mapcount(struct folio *folio,
+		int count, struct vm_area_struct *vma)
+{
+	VM_WARN_ON_FOLIO(!folio_test_large_rmappable(folio), folio);
+	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
+	atomic_add(count, &folio->_total_mapcount);
+}
+
+static inline void folio_dec_large_mapcount(struct folio *folio,
+		struct vm_area_struct *vma)
+{
+	VM_WARN_ON_FOLIO(!folio_test_large_rmappable(folio), folio);
+	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
+	atomic_dec(&folio->_total_mapcount);
+}
+
 /* RMAP flags, currently only relevant for some anon rmap operations. */
 typedef int __bitwise rmap_t;
 
@@ -219,11 +252,17 @@ static inline void __page_dup_rmap(struct page *page,
 		return;
 	}
 
+	if (unlikely(folio_test_hugetlb(folio))) {
+		atomic_inc(&folio->_entire_mapcount);
+		atomic_inc(&folio->_total_mapcount);
+		return;
+	}
+
 	if (compound)
 		atomic_inc(&folio->_entire_mapcount);
 	else
 		atomic_inc(&page->_mapcount);
-	atomic_inc(&folio->_total_mapcount);
+	folio_inc_large_mapcount(folio, dst_vma);
 }
 
 static inline void page_dup_file_rmap(struct page *page,
diff --git a/mm/rmap.c b/mm/rmap.c
index 38765796dca8..689ad85cf87e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1105,8 +1105,8 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
 }
 
 static unsigned int __folio_add_rmap_range(struct folio *folio,
-		struct page *page, unsigned int nr_pages, bool compound,
-		int *nr_pmdmapped)
+		struct page *page, unsigned int nr_pages,
+		struct vm_area_struct *vma, bool compound, int *nr_pmdmapped)
 {
 	atomic_t *mapped = &folio->_nr_pages_mapped;
 	int first, count, nr = 0;
@@ -1130,7 +1130,7 @@ static unsigned int __folio_add_rmap_range(struct folio *folio,
 				nr++;
 			}
 		} while (page++, --count > 0);
-		atomic_add(nr_pages, &folio->_total_mapcount);
+		folio_add_large_mapcount(folio, nr_pages, vma);
 	} else if (folio_test_pmd_mappable(folio)) {
 		/* That test is redundant: it's for safety or to optimize out */
@@ -1148,7 +1148,7 @@ static unsigned int __folio_add_rmap_range(struct folio *folio,
 				nr = 0;
 			}
 		}
-		atomic_inc(&folio->_total_mapcount);
+		folio_inc_large_mapcount(folio, vma);
 	} else {
 		VM_WARN_ON_ONCE_FOLIO(true, folio);
 	}
@@ -1258,7 +1258,8 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 	unsigned int nr, nr_pmdmapped = 0;
 	bool compound = flags & RMAP_COMPOUND;
 
-	nr = __folio_add_rmap_range(folio, page, 1, compound, &nr_pmdmapped);
+	nr = __folio_add_rmap_range(folio, page, 1, vma, compound,
+			&nr_pmdmapped);
 	if (nr_pmdmapped)
 		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr_pmdmapped);
 	if (nr)
@@ -1329,8 +1330,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 	}
 
 	if (folio_test_large(folio))
-		/* increment count (starts at -1) */
-		atomic_set(&folio->_total_mapcount, 0);
+		folio_set_large_mapcount(folio, 1, vma);
 
 	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
 	__folio_set_anon(folio, vma, address, true);
@@ -1355,7 +1355,7 @@ void folio_add_file_rmap_range(struct folio *folio, struct page *page,
 {
 	unsigned int nr, nr_pmdmapped = 0;
 
-	nr = __folio_add_rmap_range(folio, page, nr_pages, compound,
+	nr = __folio_add_rmap_range(folio, page, nr_pages, vma, compound,
 			&nr_pmdmapped);
 	if (nr_pmdmapped)
 		__lruvec_stat_mod_folio(folio, folio_test_swapbacked(folio) ?
@@ -1411,16 +1411,17 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 
 	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
 
-	if (folio_test_large(folio))
-		atomic_dec(&folio->_total_mapcount);
-
 	/* Hugetlb pages are not counted in NR_*MAPPED */
 	if (unlikely(folio_test_hugetlb(folio))) {
 		/* hugetlb pages are always mapped with pmds */
 		atomic_dec(&folio->_entire_mapcount);
+		atomic_dec(&folio->_total_mapcount);
 		return;
 	}
 
+	if (folio_test_large(folio))
+		folio_dec_large_mapcount(folio, vma);
+
 	/* Is page being unmapped by PTE? Is this its last map to be removed? */
 	if (likely(!compound)) {
 		last = atomic_add_negative(-1, &page->_mapcount);