From patchwork Tue Nov 22 09:42:04 2022
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13052111
Date: Tue, 22 Nov 2022 01:42:04 -0800 (PST)
From: Hugh Dickins
To: Andrew Morton
cc: Linus Torvalds , Johannes Weiner , "Kirill A. Shutemov" , Matthew Wilcox , David Hildenbrand , Vlastimil Babka , Peter Xu , Yang Shi , John Hubbard , Mike Kravetz , Sidhartha Kumar , Muchun Song , Miaohe Lin , Naoya Horiguchi , Mina Almasry , James Houghton , Zach O'Keefe , Yu Zhao , Dan Carpenter , linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 1/3] mm,thp,rmap: subpages_mapcount of PTE-mapped subpages
In-Reply-To:
Message-ID:
References: <5f52de70-975-e94f-f141-543765736181@google.com>

Following a suggestion from Linus, instead of counting every PTE map of a
compound page in subpages_mapcount, just count how many of its subpages
are PTE-mapped: this yields the exact number needed for NR_ANON_MAPPED and
NR_FILE_MAPPED stats, without any need for a locked scan of subpages; and
requires updating the count less often.

This does then revert total_mapcount() and folio_mapcount() to needing a
scan of subpages; but they are inherently racy, and need no locking, so
Linus is right that the scans are much better done there.  Plus (unlike in
6.1 and previous) subpages_mapcount lets us avoid the scan in the common
case of no PTE maps.  And page_mapped() and folio_mapped() remain scanless
and just as efficient with the new meaning of subpages_mapcount: those are
the functions which I most wanted to remove the scan from.
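To make the new accounting concrete, here is a small stand-alone sketch (not
the kernel implementation: struct thp_model, its plain int fields and
NR_SUBPAGES are simplified stand-ins for the atomics kept in the compound
page's first tail page and for thp_nr_pages()).  It shows how mapped-ness is
decided from the two counters alone, while an exact total mapcount still
scans the subpages, but only when subpages_mapcount says some are PTE-mapped:

/*
 * Stand-alone illustration only, not kernel code: thp_model and its plain
 * int fields stand in for the atomics kept in the compound page's first
 * tail page, and NR_SUBPAGES stands in for thp_nr_pages().
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_SUBPAGES 512

struct thp_model {
	int compound_mapcount;			/* stored offset by -1 */
	int subpages_mapcount;			/* subpages with any PTE map */
	int subpage_mapcount[NR_SUBPAGES];	/* per-subpage, offset by -1 */
};

/* page_mapped()/folio_mapped() stay scanless: either counter is enough */
static bool thp_mapped(const struct thp_model *t)
{
	return t->subpages_mapcount > 0 || t->compound_mapcount >= 0;
}

/* total_mapcount() scans again, but only when some subpage is PTE-mapped */
static int thp_total_mapcount(const struct thp_model *t)
{
	int mapcount = t->compound_mapcount + 1;
	int i;

	if (t->subpages_mapcount == 0)
		return mapcount;
	for (i = 0; i < NR_SUBPAGES; i++)
		mapcount += t->subpage_mapcount[i] + 1;
	return mapcount;
}

int main(void)
{
	struct thp_model t = { .compound_mapcount = -1 };
	int i;

	for (i = 0; i < NR_SUBPAGES; i++)
		t.subpage_mapcount[i] = -1;

	t.compound_mapcount++;		/* one PMD (compound) mapping */
	t.subpage_mapcount[0] += 2;	/* two PTE mappings of subpage 0 */
	t.subpages_mapcount++;		/* bumped only on its -1 -> 0 step */

	printf("mapped=%d total=%d\n", thp_mapped(&t), thp_total_mapcount(&t));
	return 0;
}

In this model, one PMD mapping plus two PTE mappings of a single subpage
give a total mapcount of 3, while subpages_mapcount stays at 1.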
The updated page_dup_compound_rmap() is no longer suitable for use by anon THP's __split_huge_pmd_locked(); but page_add_anon_rmap() can be used for that, so long as its VM_BUG_ON_PAGE(!PageLocked) is deleted. Evidence is that this way goes slightly faster than the previous implementation for most cases; but significantly faster in the (now scanless) pmds after ptes case, which started out at 870ms and was brought down to 495ms by the previous series, now takes around 105ms. Suggested-by: Linus Torvalds Signed-off-by: Hugh Dickins Acked-by: Kirill A. Shutemov --- v2: fix uninitialized 'first', reported by Yu Zhao and Dan Carpenter moved "mapped by PTE" comments above the !compound tests, per Kirill removed a newline (which goes away in the next patch), per Kirill Documentation/mm/transhuge.rst | 3 +- include/linux/mm.h | 52 ++++++----- include/linux/rmap.h | 9 +- mm/huge_memory.c | 2 +- mm/rmap.c | 160 ++++++++++++++------------------- 5 files changed, 107 insertions(+), 119 deletions(-) diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst index 1e2a637cc607..af4c9d70321d 100644 --- a/Documentation/mm/transhuge.rst +++ b/Documentation/mm/transhuge.rst @@ -122,7 +122,8 @@ pages: - map/unmap of sub-pages with PTE entry increment/decrement ->_mapcount on relevant sub-page of the compound page, and also increment/decrement - ->subpages_mapcount, stored in first tail page of the compound page. + ->subpages_mapcount, stored in first tail page of the compound page, when + _mapcount goes from -1 to 0 or 0 to -1: counting sub-pages mapped by PTE. In order to have race-free accounting of sub-pages mapped, changes to sub-page ->_mapcount, ->subpages_mapcount and ->compound_mapcount are are all locked by bit_spin_lock of PG_locked in the first tail ->flags. diff --git a/include/linux/mm.h b/include/linux/mm.h index 8fe6276d8cc2..c9e46d4d46f2 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -828,7 +828,7 @@ static inline int head_compound_mapcount(struct page *head) } /* - * Sum of mapcounts of sub-pages, does not include compound mapcount. + * Number of sub-pages mapped by PTE, does not include compound mapcount. * Must be called only on head of compound page. */ static inline int head_subpages_mapcount(struct page *head) @@ -864,23 +864,7 @@ static inline int page_mapcount(struct page *page) return head_compound_mapcount(page) + mapcount; } -static inline int total_mapcount(struct page *page) -{ - if (likely(!PageCompound(page))) - return atomic_read(&page->_mapcount) + 1; - page = compound_head(page); - return head_compound_mapcount(page) + head_subpages_mapcount(page); -} - -/* - * Return true if this page is mapped into pagetables. - * For compound page it returns true if any subpage of compound page is mapped, - * even if this particular subpage is not itself mapped by any PTE or PMD. - */ -static inline bool page_mapped(struct page *page) -{ - return total_mapcount(page) > 0; -} +int total_compound_mapcount(struct page *head); /** * folio_mapcount() - Calculate the number of mappings of this folio. 
@@ -897,8 +881,20 @@ static inline int folio_mapcount(struct folio *folio) { if (likely(!folio_test_large(folio))) return atomic_read(&folio->_mapcount) + 1; - return atomic_read(folio_mapcount_ptr(folio)) + 1 + - atomic_read(folio_subpages_mapcount_ptr(folio)); + return total_compound_mapcount(&folio->page); +} + +static inline int total_mapcount(struct page *page) +{ + if (likely(!PageCompound(page))) + return atomic_read(&page->_mapcount) + 1; + return total_compound_mapcount(compound_head(page)); +} + +static inline bool folio_large_is_mapped(struct folio *folio) +{ + return atomic_read(folio_mapcount_ptr(folio)) + + atomic_read(folio_subpages_mapcount_ptr(folio)) >= 0; } /** @@ -909,7 +905,21 @@ static inline int folio_mapcount(struct folio *folio) */ static inline bool folio_mapped(struct folio *folio) { - return folio_mapcount(folio) > 0; + if (likely(!folio_test_large(folio))) + return atomic_read(&folio->_mapcount) >= 0; + return folio_large_is_mapped(folio); +} + +/* + * Return true if this page is mapped into pagetables. + * For compound page it returns true if any sub-page of compound page is mapped, + * even if this particular sub-page is not itself mapped by any PTE or PMD. + */ +static inline bool page_mapped(struct page *page) +{ + if (likely(!PageCompound(page))) + return atomic_read(&page->_mapcount) >= 0; + return folio_large_is_mapped(page_folio(page)); } static inline struct page *virt_to_head_page(const void *x) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 011a7530dc76..5dadb9a3e010 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -204,14 +204,15 @@ void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *, void hugepage_add_new_anon_rmap(struct page *, struct vm_area_struct *, unsigned long address); -void page_dup_compound_rmap(struct page *page, bool compound); +void page_dup_compound_rmap(struct page *page); static inline void page_dup_file_rmap(struct page *page, bool compound) { - if (PageCompound(page)) - page_dup_compound_rmap(page, compound); - else + /* Is page being mapped by PTE? */ + if (likely(!compound)) atomic_inc(&page->_mapcount); + else + page_dup_compound_rmap(page); } /** diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 30056efc79ad..3dee8665c585 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2215,7 +2215,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, BUG_ON(!pte_none(*pte)); set_pte_at(mm, addr, pte, entry); if (!pmd_migration) - page_dup_compound_rmap(page + i, false); + page_add_anon_rmap(page + i, vma, addr, false); pte_unmap(pte); } diff --git a/mm/rmap.c b/mm/rmap.c index 4833d28c5e1a..e813785da613 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1117,55 +1117,36 @@ static void unlock_compound_mapcounts(struct page *head, bit_spin_unlock(PG_locked, &head[1].flags); } -/* - * When acting on a compound page under lock_compound_mapcounts(), avoid the - * unnecessary overhead of an actual atomic operation on its subpage mapcount. - * Return true if this is the first increment or the last decrement - * (remembering that page->_mapcount -1 represents logical mapcount 0). 
- */ -static bool subpage_mapcount_inc(struct page *page) -{ - int orig_mapcount = atomic_read(&page->_mapcount); - - atomic_set(&page->_mapcount, orig_mapcount + 1); - return orig_mapcount < 0; -} - -static bool subpage_mapcount_dec(struct page *page) -{ - int orig_mapcount = atomic_read(&page->_mapcount); - - atomic_set(&page->_mapcount, orig_mapcount - 1); - return orig_mapcount == 0; -} - -/* - * When mapping a THP's first pmd, or unmapping its last pmd, if that THP - * also has pte mappings, then those must be discounted: in order to maintain - * NR_ANON_MAPPED and NR_FILE_MAPPED statistics exactly, without any drift, - * and to decide when an anon THP should be put on the deferred split queue. - * This function must be called between lock_ and unlock_compound_mapcounts(). - */ -static int nr_subpages_unmapped(struct page *head, int nr_subpages) +int total_compound_mapcount(struct page *head) { - int nr = nr_subpages; + int mapcount = head_compound_mapcount(head); + int nr_subpages; int i; - /* Discount those subpages mapped by pte */ + /* In the common case, avoid the loop when no subpages mapped by PTE */ + if (head_subpages_mapcount(head) == 0) + return mapcount; + /* + * Add all the PTE mappings of those subpages mapped by PTE. + * Limit the loop, knowing that only subpages_mapcount are mapped? + * Perhaps: given all the raciness, that may be a good or a bad idea. + */ + nr_subpages = thp_nr_pages(head); for (i = 0; i < nr_subpages; i++) - if (atomic_read(&head[i]._mapcount) >= 0) - nr--; - return nr; + mapcount += atomic_read(&head[i]._mapcount); + + /* But each of those _mapcounts was based on -1 */ + mapcount += nr_subpages; + return mapcount; } /* - * page_dup_compound_rmap(), used when copying mm, or when splitting pmd, + * page_dup_compound_rmap(), used when copying mm, * provides a simple example of using lock_ and unlock_compound_mapcounts(). */ -void page_dup_compound_rmap(struct page *page, bool compound) +void page_dup_compound_rmap(struct page *head) { struct compound_mapcounts mapcounts; - struct page *head; /* * Hugetlb pages could use lock_compound_mapcounts(), like THPs do; @@ -1176,20 +1157,15 @@ void page_dup_compound_rmap(struct page *page, bool compound) * Note that hugetlb does not call page_add_file_rmap(): * here is where hugetlb shared page mapcount is raised. */ - if (PageHuge(page)) { - atomic_inc(compound_mapcount_ptr(page)); - return; - } + if (PageHuge(head)) { + atomic_inc(compound_mapcount_ptr(head)); + } else if (PageTransHuge(head)) { + /* That test is redundant: it's for safety or to optimize out */ - head = compound_head(page); - lock_compound_mapcounts(head, &mapcounts); - if (compound) { + lock_compound_mapcounts(head, &mapcounts); mapcounts.compound_mapcount++; - } else { - mapcounts.subpages_mapcount++; - subpage_mapcount_inc(page); + unlock_compound_mapcounts(head, &mapcounts); } - unlock_compound_mapcounts(head, &mapcounts); } /** @@ -1304,35 +1280,34 @@ void page_add_anon_rmap(struct page *page, struct compound_mapcounts mapcounts; int nr = 0, nr_pmdmapped = 0; bool compound = flags & RMAP_COMPOUND; - bool first; + bool first = true; if (unlikely(PageKsm(page))) lock_page_memcg(page); - else - VM_BUG_ON_PAGE(!PageLocked(page), page); - if (likely(!PageCompound(page))) { + /* Is page being mapped by PTE? Is this its first map to be added? 
*/ + if (likely(!compound)) { first = atomic_inc_and_test(&page->_mapcount); nr = first; + if (first && PageCompound(page)) { + struct page *head = compound_head(page); + + lock_compound_mapcounts(head, &mapcounts); + mapcounts.subpages_mapcount++; + nr = !mapcounts.compound_mapcount; + unlock_compound_mapcounts(head, &mapcounts); + } + } else if (PageTransHuge(page)) { + /* That test is redundant: it's for safety or to optimize out */ - } else if (compound && PageTransHuge(page)) { lock_compound_mapcounts(page, &mapcounts); first = !mapcounts.compound_mapcount; mapcounts.compound_mapcount++; if (first) { - nr = nr_pmdmapped = thp_nr_pages(page); - if (mapcounts.subpages_mapcount) - nr = nr_subpages_unmapped(page, nr_pmdmapped); + nr_pmdmapped = thp_nr_pages(page); + nr = nr_pmdmapped - mapcounts.subpages_mapcount; } unlock_compound_mapcounts(page, &mapcounts); - } else { - struct page *head = compound_head(page); - - lock_compound_mapcounts(head, &mapcounts); - mapcounts.subpages_mapcount++; - first = subpage_mapcount_inc(page); - nr = first && !mapcounts.compound_mapcount; - unlock_compound_mapcounts(head, &mapcounts); } VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page); @@ -1411,28 +1386,29 @@ void page_add_file_rmap(struct page *page, VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page); lock_page_memcg(page); - if (likely(!PageCompound(page))) { + /* Is page being mapped by PTE? Is this its first map to be added? */ + if (likely(!compound)) { first = atomic_inc_and_test(&page->_mapcount); nr = first; + if (first && PageCompound(page)) { + struct page *head = compound_head(page); + + lock_compound_mapcounts(head, &mapcounts); + mapcounts.subpages_mapcount++; + nr = !mapcounts.compound_mapcount; + unlock_compound_mapcounts(head, &mapcounts); + } + } else if (PageTransHuge(page)) { + /* That test is redundant: it's for safety or to optimize out */ - } else if (compound && PageTransHuge(page)) { lock_compound_mapcounts(page, &mapcounts); first = !mapcounts.compound_mapcount; mapcounts.compound_mapcount++; if (first) { - nr = nr_pmdmapped = thp_nr_pages(page); - if (mapcounts.subpages_mapcount) - nr = nr_subpages_unmapped(page, nr_pmdmapped); + nr_pmdmapped = thp_nr_pages(page); + nr = nr_pmdmapped - mapcounts.subpages_mapcount; } unlock_compound_mapcounts(page, &mapcounts); - } else { - struct page *head = compound_head(page); - - lock_compound_mapcounts(head, &mapcounts); - mapcounts.subpages_mapcount++; - first = subpage_mapcount_inc(page); - nr = first && !mapcounts.compound_mapcount; - unlock_compound_mapcounts(head, &mapcounts); } if (nr_pmdmapped) @@ -1471,29 +1447,29 @@ void page_remove_rmap(struct page *page, lock_page_memcg(page); - /* page still mapped by someone else? */ - if (likely(!PageCompound(page))) { + /* Is page being unmapped by PTE? Is this its last map to be removed? 
*/ + if (likely(!compound)) { last = atomic_add_negative(-1, &page->_mapcount); nr = last; + if (last && PageCompound(page)) { + struct page *head = compound_head(page); + + lock_compound_mapcounts(head, &mapcounts); + mapcounts.subpages_mapcount--; + nr = !mapcounts.compound_mapcount; + unlock_compound_mapcounts(head, &mapcounts); + } + } else if (PageTransHuge(page)) { + /* That test is redundant: it's for safety or to optimize out */ - } else if (compound && PageTransHuge(page)) { lock_compound_mapcounts(page, &mapcounts); mapcounts.compound_mapcount--; last = !mapcounts.compound_mapcount; if (last) { - nr = nr_pmdmapped = thp_nr_pages(page); - if (mapcounts.subpages_mapcount) - nr = nr_subpages_unmapped(page, nr_pmdmapped); + nr_pmdmapped = thp_nr_pages(page); + nr = nr_pmdmapped - mapcounts.subpages_mapcount; } unlock_compound_mapcounts(page, &mapcounts); - } else { - struct page *head = compound_head(page); - - lock_compound_mapcounts(head, &mapcounts); - mapcounts.subpages_mapcount--; - last = subpage_mapcount_dec(page); - nr = last && !mapcounts.compound_mapcount; - unlock_compound_mapcounts(head, &mapcounts); } if (nr_pmdmapped) {
From patchwork Tue Nov 22 09:49:36 2022
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13052113
Date: Tue, 22 Nov 2022 01:49:36 -0800 (PST)
From: Hugh Dickins
To: Andrew Morton
cc: Linus Torvalds , Johannes Weiner , "Kirill A. Shutemov" , Matthew Wilcox , David Hildenbrand , Vlastimil Babka , Peter Xu , Yang Shi , John Hubbard , Mike Kravetz , Sidhartha Kumar , Muchun Song , Miaohe Lin , Naoya Horiguchi , Mina Almasry , James Houghton , Zach O'Keefe , Yu Zhao , Dan Carpenter , linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 2/3] mm,thp,rmap: subpages_mapcount COMPOUND_MAPPED if PMD-mapped
In-Reply-To:
Message-ID: <3978f3ca-5473-55a7-4e14-efea5968d892@google.com>
References: <5f52de70-975-e94f-f141-543765736181@google.com>
Can the lock_compound_mapcount() bit_spin_lock apparatus be removed now?
Yes.  Not by atomic64_t or cmpxchg games, those get difficult on 32-bit;
but if we slightly abuse subpages_mapcount by additionally demanding that
one bit be set there when the compound page is PMD-mapped, then a cascade
of two atomic ops is able to maintain the stats without bit_spin_lock.

This is harder to reason about than when bit_spin_locked, but I believe
safe; and no drift in stats detected when testing.  When there are racing
removes and adds, of course the sequence of operations is less
well-defined; but each operation on subpages_mapcount is atomically good.
What might be disastrous is if subpages_mapcount could ever fleetingly
appear negative: but the pte lock (or pmd lock) these rmap functions are
called under ensures that a last remove cannot race ahead of a first add.

Continue to make an exception for hugetlb (PageHuge) pages, though that
exception can be easily removed by a further commit if necessary: leave
subpages_mapcount 0, don't bother with COMPOUND_MAPPED in its case, just
carry on checking compound_mapcount too in folio_mapped(), page_mapped().

Evidence is that this way goes slightly faster than the previous
implementation in all cases (pmds after ptes now taking around 103ms); and
relieves us of worrying about contention on the bit_spin_lock.

Signed-off-by: Hugh Dickins
Acked-by: Kirill A. Shutemov
---
v2: head_subpages_mapcount() apply the SUBPAGES_MAPPED mask, per Kirill
    (which consequently modifies mm/page_alloc.c instead of mm/debug.c)
    reverse order of reads in folio_large_is_mapped(), per Kirill

 Documentation/mm/transhuge.rst | 7 +-
 include/linux/mm.h | 19 +++++-
 include/linux/rmap.h | 13 ++--
 mm/page_alloc.c | 2 +-
 mm/rmap.c | 121 +++++++--------------------
 5 files changed, 51 insertions(+), 111 deletions(-)

diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst
index af4c9d70321d..ec3dc5b04226 100644
--- a/Documentation/mm/transhuge.rst
+++ b/Documentation/mm/transhuge.rst
@@ -118,15 +118,14 @@ pages: succeeds on tail pages. - map/unmap of PMD entry for the whole compound page increment/decrement - ->compound_mapcount, stored in the first tail page of the compound page. + ->compound_mapcount, stored in the first tail page of the compound page; + and also increment/decrement ->subpages_mapcount (also in the first tail) + by COMPOUND_MAPPED when compound_mapcount goes from -1 to 0 or 0 to -1. - map/unmap of sub-pages with PTE entry increment/decrement ->_mapcount on relevant sub-page of the compound page, and also increment/decrement ->subpages_mapcount, stored in first tail page of the compound page, when _mapcount goes from -1 to 0 or 0 to -1: counting sub-pages mapped by PTE. - In order to have race-free accounting of sub-pages mapped, changes to - sub-page ->_mapcount, ->subpages_mapcount and ->compound_mapcount are - are all locked by bit_spin_lock of PG_locked in the first tail ->flags.
split_huge_page internally has to distribute the refcounts in the head page to the tail pages before clearing all PG_head/tail bits from the page diff --git a/include/linux/mm.h b/include/linux/mm.h index c9e46d4d46f2..d8de9f63c376 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -827,13 +827,22 @@ static inline int head_compound_mapcount(struct page *head) return atomic_read(compound_mapcount_ptr(head)) + 1; } +/* + * If a 16GB hugetlb page were mapped by PTEs of all of its 4kB sub-pages, + * its subpages_mapcount would be 0x400000: choose the COMPOUND_MAPPED bit + * above that range, instead of 2*(PMD_SIZE/PAGE_SIZE). Hugetlb currently + * leaves subpages_mapcount at 0, but avoid surprise if it participates later. + */ +#define COMPOUND_MAPPED 0x800000 +#define SUBPAGES_MAPPED (COMPOUND_MAPPED - 1) + /* * Number of sub-pages mapped by PTE, does not include compound mapcount. * Must be called only on head of compound page. */ static inline int head_subpages_mapcount(struct page *head) { - return atomic_read(subpages_mapcount_ptr(head)); + return atomic_read(subpages_mapcount_ptr(head)) & SUBPAGES_MAPPED; } /* @@ -893,8 +902,12 @@ static inline int total_mapcount(struct page *page) static inline bool folio_large_is_mapped(struct folio *folio) { - return atomic_read(folio_mapcount_ptr(folio)) + - atomic_read(folio_subpages_mapcount_ptr(folio)) >= 0; + /* + * Reading folio_mapcount_ptr() below could be omitted if hugetlb + * participated in incrementing subpages_mapcount when compound mapped. + */ + return atomic_read(folio_subpages_mapcount_ptr(folio)) > 0 || + atomic_read(folio_mapcount_ptr(folio)) >= 0; } /** diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 5dadb9a3e010..bd3504d11b15 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -204,15 +204,14 @@ void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *, void hugepage_add_new_anon_rmap(struct page *, struct vm_area_struct *, unsigned long address); -void page_dup_compound_rmap(struct page *page); +static inline void __page_dup_rmap(struct page *page, bool compound) +{ + atomic_inc(compound ? compound_mapcount_ptr(page) : &page->_mapcount); +} static inline void page_dup_file_rmap(struct page *page, bool compound) { - /* Is page being mapped by PTE? */ - if (likely(!compound)) - atomic_inc(&page->_mapcount); - else - page_dup_compound_rmap(page); + __page_dup_rmap(page, compound); } /** @@ -261,7 +260,7 @@ static inline int page_try_dup_anon_rmap(struct page *page, bool compound, * the page R/O into both processes. 
*/ dup: - page_dup_file_rmap(page, compound); + __page_dup_rmap(page, compound); return 0; } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index f7a63684e6c4..400c51d06939 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1330,7 +1330,7 @@ static int free_tail_pages_check(struct page *head_page, struct page *page) bad_page(page, "nonzero compound_mapcount"); goto out; } - if (unlikely(head_subpages_mapcount(head_page))) { + if (unlikely(atomic_read(subpages_mapcount_ptr(head_page)))) { bad_page(page, "nonzero subpages_mapcount"); goto out; } diff --git a/mm/rmap.c b/mm/rmap.c index e813785da613..459dc1c44d8a 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1085,38 +1085,6 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff, return page_vma_mkclean_one(&pvmw); } -struct compound_mapcounts { - unsigned int compound_mapcount; - unsigned int subpages_mapcount; -}; - -/* - * lock_compound_mapcounts() first locks, then copies subpages_mapcount and - * compound_mapcount from head[1].compound_mapcount and subpages_mapcount, - * converting from struct page's internal representation to logical count - * (that is, adding 1 to compound_mapcount to hide its offset by -1). - */ -static void lock_compound_mapcounts(struct page *head, - struct compound_mapcounts *local) -{ - bit_spin_lock(PG_locked, &head[1].flags); - local->compound_mapcount = atomic_read(compound_mapcount_ptr(head)) + 1; - local->subpages_mapcount = atomic_read(subpages_mapcount_ptr(head)); -} - -/* - * After caller has updated subpage._mapcount, local subpages_mapcount and - * local compound_mapcount, as necessary, unlock_compound_mapcounts() converts - * and copies them back to the compound head[1] fields, and then unlocks. - */ -static void unlock_compound_mapcounts(struct page *head, - struct compound_mapcounts *local) -{ - atomic_set(compound_mapcount_ptr(head), local->compound_mapcount - 1); - atomic_set(subpages_mapcount_ptr(head), local->subpages_mapcount); - bit_spin_unlock(PG_locked, &head[1].flags); -} - int total_compound_mapcount(struct page *head) { int mapcount = head_compound_mapcount(head); @@ -1140,34 +1108,6 @@ int total_compound_mapcount(struct page *head) return mapcount; } -/* - * page_dup_compound_rmap(), used when copying mm, - * provides a simple example of using lock_ and unlock_compound_mapcounts(). - */ -void page_dup_compound_rmap(struct page *head) -{ - struct compound_mapcounts mapcounts; - - /* - * Hugetlb pages could use lock_compound_mapcounts(), like THPs do; - * but at present they are still being managed by atomic operations: - * which are likely to be somewhat faster, so don't rush to convert - * them over without evaluating the effect. - * - * Note that hugetlb does not call page_add_file_rmap(): - * here is where hugetlb shared page mapcount is raised. 
- */ - if (PageHuge(head)) { - atomic_inc(compound_mapcount_ptr(head)); - } else if (PageTransHuge(head)) { - /* That test is redundant: it's for safety or to optimize out */ - - lock_compound_mapcounts(head, &mapcounts); - mapcounts.compound_mapcount++; - unlock_compound_mapcounts(head, &mapcounts); - } -} - /** * page_move_anon_rmap - move a page to our anon_vma * @page: the page to move to our anon_vma @@ -1277,7 +1217,7 @@ static void __page_check_anon_rmap(struct page *page, void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address, rmap_t flags) { - struct compound_mapcounts mapcounts; + atomic_t *mapped; int nr = 0, nr_pmdmapped = 0; bool compound = flags & RMAP_COMPOUND; bool first = true; @@ -1290,24 +1230,20 @@ void page_add_anon_rmap(struct page *page, first = atomic_inc_and_test(&page->_mapcount); nr = first; if (first && PageCompound(page)) { - struct page *head = compound_head(page); - - lock_compound_mapcounts(head, &mapcounts); - mapcounts.subpages_mapcount++; - nr = !mapcounts.compound_mapcount; - unlock_compound_mapcounts(head, &mapcounts); + mapped = subpages_mapcount_ptr(compound_head(page)); + nr = atomic_inc_return_relaxed(mapped); + nr = !(nr & COMPOUND_MAPPED); } } else if (PageTransHuge(page)) { /* That test is redundant: it's for safety or to optimize out */ - lock_compound_mapcounts(page, &mapcounts); - first = !mapcounts.compound_mapcount; - mapcounts.compound_mapcount++; + first = atomic_inc_and_test(compound_mapcount_ptr(page)); if (first) { + mapped = subpages_mapcount_ptr(page); + nr = atomic_add_return_relaxed(COMPOUND_MAPPED, mapped); nr_pmdmapped = thp_nr_pages(page); - nr = nr_pmdmapped - mapcounts.subpages_mapcount; + nr = nr_pmdmapped - (nr & SUBPAGES_MAPPED); } - unlock_compound_mapcounts(page, &mapcounts); } VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page); @@ -1360,6 +1296,7 @@ void page_add_new_anon_rmap(struct page *page, VM_BUG_ON_PAGE(!PageTransHuge(page), page); /* increment count (starts at -1) */ atomic_set(compound_mapcount_ptr(page), 0); + atomic_set(subpages_mapcount_ptr(page), COMPOUND_MAPPED); nr = thp_nr_pages(page); __mod_lruvec_page_state(page, NR_ANON_THPS, nr); } @@ -1379,7 +1316,7 @@ void page_add_new_anon_rmap(struct page *page, void page_add_file_rmap(struct page *page, struct vm_area_struct *vma, bool compound) { - struct compound_mapcounts mapcounts; + atomic_t *mapped; int nr = 0, nr_pmdmapped = 0; bool first; @@ -1391,24 +1328,20 @@ void page_add_file_rmap(struct page *page, first = atomic_inc_and_test(&page->_mapcount); nr = first; if (first && PageCompound(page)) { - struct page *head = compound_head(page); - - lock_compound_mapcounts(head, &mapcounts); - mapcounts.subpages_mapcount++; - nr = !mapcounts.compound_mapcount; - unlock_compound_mapcounts(head, &mapcounts); + mapped = subpages_mapcount_ptr(compound_head(page)); + nr = atomic_inc_return_relaxed(mapped); + nr = !(nr & COMPOUND_MAPPED); } } else if (PageTransHuge(page)) { /* That test is redundant: it's for safety or to optimize out */ - lock_compound_mapcounts(page, &mapcounts); - first = !mapcounts.compound_mapcount; - mapcounts.compound_mapcount++; + first = atomic_inc_and_test(compound_mapcount_ptr(page)); if (first) { + mapped = subpages_mapcount_ptr(page); + nr = atomic_add_return_relaxed(COMPOUND_MAPPED, mapped); nr_pmdmapped = thp_nr_pages(page); - nr = nr_pmdmapped - mapcounts.subpages_mapcount; + nr = nr_pmdmapped - (nr & SUBPAGES_MAPPED); } - unlock_compound_mapcounts(page, &mapcounts); } if (nr_pmdmapped) @@ 
-1432,7 +1365,7 @@ void page_add_file_rmap(struct page *page, void page_remove_rmap(struct page *page, struct vm_area_struct *vma, bool compound) { - struct compound_mapcounts mapcounts; + atomic_t *mapped; int nr = 0, nr_pmdmapped = 0; bool last; @@ -1452,24 +1385,20 @@ void page_remove_rmap(struct page *page, last = atomic_add_negative(-1, &page->_mapcount); nr = last; if (last && PageCompound(page)) { - struct page *head = compound_head(page); - - lock_compound_mapcounts(head, &mapcounts); - mapcounts.subpages_mapcount--; - nr = !mapcounts.compound_mapcount; - unlock_compound_mapcounts(head, &mapcounts); + mapped = subpages_mapcount_ptr(compound_head(page)); + nr = atomic_dec_return_relaxed(mapped); + nr = !(nr & COMPOUND_MAPPED); } } else if (PageTransHuge(page)) { /* That test is redundant: it's for safety or to optimize out */ - lock_compound_mapcounts(page, &mapcounts); - mapcounts.compound_mapcount--; - last = !mapcounts.compound_mapcount; + last = atomic_add_negative(-1, compound_mapcount_ptr(page)); if (last) { + mapped = subpages_mapcount_ptr(page); + nr = atomic_sub_return_relaxed(COMPOUND_MAPPED, mapped); nr_pmdmapped = thp_nr_pages(page); - nr = nr_pmdmapped - mapcounts.subpages_mapcount; + nr = nr_pmdmapped - (nr & SUBPAGES_MAPPED); } - unlock_compound_mapcounts(page, &mapcounts); } if (nr_pmdmapped) {
From patchwork Tue Nov 22 09:51:50 2022
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13052114
Date: Tue, 22 Nov 2022 01:51:50 -0800 (PST)
From: Hugh Dickins
To: Andrew Morton
cc: Linus Torvalds , Johannes Weiner , "Kirill A. Shutemov" , Matthew Wilcox , David Hildenbrand , Vlastimil Babka , Peter Xu , Yang Shi , John Hubbard , Mike Kravetz , Sidhartha Kumar , Muchun Song , Miaohe Lin , Naoya Horiguchi , Mina Almasry , James Houghton , Zach O'Keefe , Yu Zhao , Dan Carpenter , linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 3/3] mm,thp,rmap: clean up the end of __split_huge_pmd_locked()
In-Reply-To:
Message-ID:
References: <5f52de70-975-e94f-f141-543765736181@google.com>
It's hard to add a page_add_anon_rmap() into __split_huge_pmd_locked()'s
HPAGE_PMD_NR set_pte_at() loop, without wincing at the "freeze" case's
HPAGE_PMD_NR page_remove_rmap() loop below it.

It's just a mistake to add rmaps in the "freeze" (insert migration entries
prior to splitting huge page) case: the pmd_migration case already avoids
doing that, so just follow its lead.  page_ref_add() versus put_page()
likewise.  But why is one more put_page() needed in the "freeze" case?
Because it's removing the pmd rmap, already removed when pmd_migration
(and freeze and pmd_migration are mutually exclusive cases).

Signed-off-by: Hugh Dickins
Acked-by: Kirill A. Shutemov
---
v2: same as v1, plus Ack from Kirill

 mm/huge_memory.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3dee8665c585..ab5ab1a013e1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2135,7 +2135,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, uffd_wp = pmd_uffd_wp(old_pmd); VM_BUG_ON_PAGE(!page_count(page), page); - page_ref_add(page, HPAGE_PMD_NR - 1); /* * Without "freeze", we'll simply split the PMD, propagating the @@ -2155,6 +2154,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, anon_exclusive = PageAnon(page) && PageAnonExclusive(page); if (freeze && anon_exclusive && page_try_share_anon_rmap(page)) freeze = false; + if (!freeze) + page_ref_add(page, HPAGE_PMD_NR - 1); } /* @@ -2210,27 +2211,21 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, entry = pte_mksoft_dirty(entry); if (uffd_wp) entry = pte_mkuffd_wp(entry); + page_add_anon_rmap(page + i, vma, addr, false); } pte = pte_offset_map(&_pmd, addr); BUG_ON(!pte_none(*pte)); set_pte_at(mm, addr, pte, entry); - if (!pmd_migration) - page_add_anon_rmap(page + i, vma, addr, false); pte_unmap(pte); } if (!pmd_migration) page_remove_rmap(page, vma, true); + if (freeze) + put_page(page); smp_wmb(); /* make pte visible before pmd */ pmd_populate(mm, pmd, pgtable); - - if (freeze) { - for (i = 0; i < HPAGE_PMD_NR; i++) { - page_remove_rmap(page + i, vma, false); - put_page(page + i); - } - } } void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,