From patchwork Fri Nov 24 13:26:18 2023
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13467666
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Linus Torvalds,
 Ryan Roberts, Matthew Wilcox, Hugh Dickins, Yin Fengwei, Yang Shi,
 Ying Huang, Zi Yan, Peter Zijlstra, Ingo Molnar, Will Deacon,
 Waiman Long, "Paul E. McKenney"
Subject: [PATCH WIP v1 13/20] mm/huge_memory: batch rmap operations in __split_huge_pmd_locked()
Date: Fri, 24 Nov 2023 14:26:18 +0100
Message-ID: <20231124132626.235350-14-david@redhat.com>
In-Reply-To: <20231124132626.235350-1-david@redhat.com>
References: <20231124132626.235350-1-david@redhat.com>

Let's batch the rmap operations, in preparation for making the individual
page_add_anon_rmap() calls more expensive.

While at it, use more folio operations (but only in the code branch we're
touching), use VM_WARN_ON_FOLIO(), and pass RMAP_EXCLUSIVE instead of
manually setting PageAnonExclusive.

We should never see non-anon pages on that branch: otherwise, the existing
page_add_anon_rmap() call would have been flawed already.
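To illustrate what the batching changes in the PTE-remap path, here is a
rough before/after sketch. This is a simplified sketch, not the literal
function: it relies on the folio_add_anon_rmap_range() helper this series
builds on and leaves out the freeze handling.

	/* Before: one rmap call per subpage, PageAnonExclusive set by hand. */
	page_ref_add(page, HPAGE_PMD_NR - 1);
	for (i = 0; i < HPAGE_PMD_NR; i++) {
		if (anon_exclusive)
			SetPageAnonExclusive(page + i);
		page_add_anon_rmap(page + i, vma, haddr + i * PAGE_SIZE, RMAP_NONE);
	}

	/*
	 * After: a single batched call covering all HPAGE_PMD_NR subpages;
	 * exclusivity is conveyed via RMAP_EXCLUSIVE instead of being set
	 * manually on each subpage.
	 */
	folio_ref_add(folio, HPAGE_PMD_NR - 1);
	folio_add_anon_rmap_range(folio, page, HPAGE_PMD_NR, vma, haddr,
				  anon_exclusive ? RMAP_EXCLUSIVE : RMAP_NONE);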
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/huge_memory.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fd7251923557..f47971d1afbf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2100,6 +2100,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long haddr, bool freeze)
 {
 	struct mm_struct *mm = vma->vm_mm;
+	struct folio *folio;
 	struct page *page;
 	pgtable_t pgtable;
 	pmd_t old_pmd, _pmd;
@@ -2195,16 +2196,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		uffd_wp = pmd_swp_uffd_wp(old_pmd);
 	} else {
 		page = pmd_page(old_pmd);
+		folio = page_folio(page);
 		if (pmd_dirty(old_pmd)) {
 			dirty = true;
-			SetPageDirty(page);
+			folio_set_dirty(folio);
 		}
 		write = pmd_write(old_pmd);
 		young = pmd_young(old_pmd);
 		soft_dirty = pmd_soft_dirty(old_pmd);
 		uffd_wp = pmd_uffd_wp(old_pmd);
 
-		VM_BUG_ON_PAGE(!page_count(page), page);
+		VM_WARN_ON_FOLIO(!folio_ref_count(folio), folio);
+		VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
 		/*
 		 * Without "freeze", we'll simply split the PMD, propagating the
@@ -2221,11 +2224,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		 *
 		 * See page_try_share_anon_rmap(): invalidate PMD first.
 		 */
-		anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
+		anon_exclusive = PageAnonExclusive(page);
 		if (freeze && anon_exclusive && page_try_share_anon_rmap(page))
 			freeze = false;
-		if (!freeze)
-			page_ref_add(page, HPAGE_PMD_NR - 1);
+		if (!freeze) {
+			rmap_t rmap_flags = RMAP_NONE;
+
+			folio_ref_add(folio, HPAGE_PMD_NR - 1);
+			if (anon_exclusive)
+				rmap_flags = RMAP_EXCLUSIVE;
+			folio_add_anon_rmap_range(folio, page, HPAGE_PMD_NR,
+						  vma, haddr, rmap_flags);
+		}
 	}
 
 	/*
@@ -2268,8 +2278,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
 			if (write)
 				entry = pte_mkwrite(entry, vma);
-			if (anon_exclusive)
-				SetPageAnonExclusive(page + i);
 			if (!young)
 				entry = pte_mkold(entry);
 			/* NOTE: this may set soft-dirty too on some archs */
@@ -2279,7 +2287,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 				entry = pte_mksoft_dirty(entry);
 			if (uffd_wp)
 				entry = pte_mkuffd_wp(entry);
-			page_add_anon_rmap(page + i, vma, addr, RMAP_NONE);
 		}
 		VM_BUG_ON(!pte_none(ptep_get(pte)));
 		set_pte_at(mm, addr, pte, entry);