From patchwork Thu Sep 22 22:40:38 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12985905
Date: Thu, 22 Sep 2022 15:40:38 -0700
In-Reply-To: <20220922224046.1143204-1-zokeefe@google.com>
References: <20220922224046.1143204-1-zokeefe@google.com>
Message-ID: <20220922224046.1143204-3-zokeefe@google.com>
Subject: [PATCH mm-unstable v4 02/10] mm/khugepaged: attempt to map file/shmem-backed pte-mapped THPs by pmds
From: "Zach O'Keefe"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-api@vger.kernel.org, Axel Rasmussen,
 James Houghton, Hugh Dickins, Yang Shi, Miaohe Lin, David Hildenbrand,
 David Rientjes, Matthew Wilcox, Pasha Tatashin, Peter Xu, Rongwei Wang,
 SeongJae Park, Song Liu, Vlastimil Babka, Chris Kennelly,
 "Kirill A. Shutemov", Minchan Kim, Patrick Xia

The main benefit of THPs is that they can be mapped at the pmd level,
increasing the likelihood of a TLB hit and spending fewer cycles in page
table walks.  pte-mapped hugepages - that is, hugepage-aligned compound
pages of order HPAGE_PMD_ORDER mapped by ptes - although contiguous in
physical memory, don't have this advantage.  In fact, one could argue they
are detrimental to system performance overall since they occupy a precious
hugepage-aligned/sized region of physical memory that could otherwise be
used more effectively.  Additionally, pte-mapped hugepages can be the
cheapest memory to collapse for khugepaged since no new hugepage
allocation or copying of memory contents is necessary - we only need to
update the mapping page tables.

In the anonymous collapse path, we are able to collapse pte-mapped
hugepages (albeit, perhaps suboptimally), but the file/shmem path makes no
effort when compound pages (of any order) are encountered.

Identify pte-mapped hugepages in the file/shmem collapse path.
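To make that identification concrete, the shape of the check used below in
collapse_file() and khugepaged_scan_file() is roughly the following
(illustration only - the helper name is made up for this sketch and does
not exist in the patch):

	/*
	 * Sketch: a compound page found at page cache index 'start' is a
	 * candidate pte-mapped THP iff it is a full HPAGE_PMD_ORDER page
	 * whose head sits exactly at the hugepage-aligned index we are
	 * scanning.
	 */
	static bool is_maybe_pmd_mappable(struct page *page, pgoff_t start)
	{
		struct page *head = compound_head(page);

		return compound_order(head) == HPAGE_PMD_ORDER &&
		       head->index == start;
	}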
The final step of this identification is a racy check of the pmd value to
ensure it maps a pte table.  This should be fine, since races that result
in a false positive (i.e. attempting collapse even though we shouldn't)
will fail later in collapse_pte_mapped_thp() once we actually lock
mmap_lock and reinspect the pmd value.  Races that result in false
negatives (i.e. deciding not to attempt collapse when we should have)
shouldn't be an issue, since in the worst case we do nothing - which is
what we've done up to this point.  We make a similar check in
retract_page_tables().  If we do think we've found a pte-mapped hugepage
in khugepaged context, attempt to update the page tables mapping this
hugepage.

Note that these collapses still count towards the
/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed counter,
and if the pte-mapped hugepage was also mapped into multiple processes'
address spaces, the counter could be incremented once for each page table
update.  Since we increment the counter when a pte-mapped hugepage is
successfully added to the list of to-collapse pte-mapped THPs, it's
possible that we never actually update the page table either.  This is
different from how file/shmem pages_collapsed accounting works today,
where only a successful page cache update is counted (it's also possible
here that no page tables are actually changed).  Though it incurs some
slop, this is preferred to either not accounting for the event at all, or
plumbing through data in struct mm_slot on whether to account for the
collapse or not.

Also note that work still needs to be done to support arbitrary compound
pages, and that this should all be converted to using folios.

[shy828301@gmail.com: Spelling mistake, update comment, and add Documentation]
  Link: https://lore.kernel.org/linux-mm/CAHbLzkpHwZxFzjfX9nxVoRhzup8WMjMfyL6Xiq8mZ9M-N3ombw@mail.gmail.com/
Link: https://lkml.kernel.org/r/20220907144521.3115321-3-zokeefe@google.com
Signed-off-by: Zach O'Keefe
Reviewed-by: Yang Shi
Cc: Axel Rasmussen
Cc: Chris Kennelly
Cc: David Hildenbrand
Cc: David Rientjes
Cc: Hugh Dickins
Cc: James Houghton
Cc: "Kirill A. Shutemov"
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Minchan Kim
Cc: Pasha Tatashin
Cc: Peter Xu
Cc: Rongwei Wang
Cc: SeongJae Park
Cc: Song Liu
Cc: Vlastimil Babka
---
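Not part of the patch - a condensed sketch of the khugepaged-context flow
described in the changelog, using the function names from the diff below;
the wrapper function name is made up and the locking/error handling is
elided, so treat it as an illustration rather than the implementation:

	static void note_pte_mapped_thp(struct mm_struct *mm,
					unsigned long addr, int result)
	{
		pmd_t *pmd;

		/*
		 * Racy pmd check: a false positive fails later in
		 * collapse_pte_mapped_thp() once mmap_lock is taken for
		 * write and the pmd is reinspected; a false negative just
		 * means we do nothing, as before this patch.
		 */
		if (result == SCAN_PTE_MAPPED_HUGEPAGE &&
		    find_pmd_or_thp_or_none(mm, addr, &pmd) == SCAN_SUCCEED &&
		    khugepaged_add_pte_mapped_thp(mm, addr))
			++khugepaged_pages_collapsed; /* pmd rewritten later */
	}
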
 Documentation/admin-guide/mm/transhuge.rst |  9 ++-
 include/trace/events/huge_memory.h         |  1 +
 mm/khugepaged.c                            | 69 +++++++++++++++++++---
 3 files changed, 71 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 8e3418ec4503..8ee78ec232eb 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -191,7 +191,14 @@ allocation failure to throttle the next allocation attempt::
 
 	/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs
 
-The khugepaged progress can be seen in the number of pages collapsed::
+The khugepaged progress can be seen in the number of pages collapsed (note
+that this counter may not be an exact count of the number of pages
+collapsed, since "collapsed" could mean multiple things: (1) A PTE mapping
+being replaced by a PMD mapping, or (2) All 4K physical pages replaced by
+one 2M hugepage.  Each may happen independently, or together, depending on
+the type of memory and the failures that occur.  As such, this value should
+be interpreted roughly as a sign of progress, and counters in /proc/vmstat
+consulted for more accurate accounting)::
 
 	/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed
 
diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 55392bf30a03..fbbb25494d60 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -17,6 +17,7 @@
 	EM( SCAN_EXCEED_SHARED_PTE,	"exceed_shared_pte")		\
 	EM( SCAN_PTE_NON_PRESENT,	"pte_non_present")		\
 	EM( SCAN_PTE_UFFD_WP,		"pte_uffd_wp")			\
+	EM( SCAN_PTE_MAPPED_HUGEPAGE,	"pte_mapped_hugepage")		\
 	EM( SCAN_PAGE_RO,		"no_writable_page")		\
 	EM( SCAN_LACK_REFERENCED_PAGE,	"lack_referenced_page")		\
 	EM( SCAN_PAGE_NULL,		"page_null")			\
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b3ebe90a66d9..b1e3f83c4eb2 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -35,6 +35,7 @@ enum scan_result {
 	SCAN_EXCEED_SHARED_PTE,
 	SCAN_PTE_NON_PRESENT,
 	SCAN_PTE_UFFD_WP,
+	SCAN_PTE_MAPPED_HUGEPAGE,
 	SCAN_PAGE_RO,
 	SCAN_LACK_REFERENCED_PAGE,
 	SCAN_PAGE_NULL,
@@ -1320,20 +1321,24 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
  * Notify khugepaged that given addr of the mm is pte-mapped THP. Then
  * khugepaged should try to collapse the page table.
  */
-static void khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
+static bool khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
 					  unsigned long addr)
 {
 	struct khugepaged_mm_slot *mm_slot;
 	struct mm_slot *slot;
+	bool ret = false;
 
 	VM_BUG_ON(addr & ~HPAGE_PMD_MASK);
 
 	spin_lock(&khugepaged_mm_lock);
 	slot = mm_slot_lookup(mm_slots_hash, mm);
 	mm_slot = mm_slot_entry(slot, struct khugepaged_mm_slot, slot);
-	if (likely(mm_slot && mm_slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP))
+	if (likely(mm_slot && mm_slot->nr_pte_mapped_thp < MAX_PTE_MAPPED_THP)) {
 		mm_slot->pte_mapped_thp[mm_slot->nr_pte_mapped_thp++] = addr;
+		ret = true;
+	}
 	spin_unlock(&khugepaged_mm_lock);
+	return ret;
 }
 
 static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
@@ -1370,9 +1375,16 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
 	pte_t *start_pte, *pte;
 	pmd_t *pmd;
 	spinlock_t *ptl;
-	int count = 0;
+	int count = 0, result = SCAN_FAIL;
 	int i;
 
+	mmap_assert_write_locked(mm);
+
+	/* Fast check before locking page if not PMD mapping PTE table */
+	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
+	if (result != SCAN_SUCCEED)
+		return;
+
 	if (!vma || !vma->vm_file ||
 	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
 		return;
@@ -1726,9 +1738,16 @@ static int collapse_file(struct mm_struct *mm, struct file *file,
 		/*
 		 * If file was truncated then extended, or hole-punched, before
 		 * we locked the first page, then a THP might be there already.
+		 * This will be discovered on the first iteration.
 		 */
 		if (PageTransCompound(page)) {
-			result = SCAN_PAGE_COMPOUND;
+			struct page *head = compound_head(page);
+
+			result = compound_order(head) == HPAGE_PMD_ORDER &&
+					head->index == start
+					/* Maybe PMD-mapped */
+					? SCAN_PTE_MAPPED_HUGEPAGE
+					: SCAN_PAGE_COMPOUND;
 			goto out_unlock;
 		}
 
@@ -1962,11 +1981,23 @@ static int khugepaged_scan_file(struct mm_struct *mm, struct file *file,
 		}
 
 		/*
-		 * XXX: khugepaged should compact smaller compound pages
+		 * TODO: khugepaged should compact smaller compound pages
 		 * into a PMD sized page
 		 */
 		if (PageTransCompound(page)) {
-			result = SCAN_PAGE_COMPOUND;
+			struct page *head = compound_head(page);
+
+			result = compound_order(head) == HPAGE_PMD_ORDER &&
+					head->index == start
+					/* Maybe PMD-mapped */
+					? SCAN_PTE_MAPPED_HUGEPAGE
+					: SCAN_PAGE_COMPOUND;
+			/*
+			 * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
+			 * by the caller won't touch the page cache, and so
+			 * it's safe to skip LRU and refcount checks before
+			 * returning.
+			 */
 			break;
 		}
 
@@ -2026,6 +2057,12 @@ static int khugepaged_scan_file(struct mm_struct *mm, struct file *file,
 static void khugepaged_collapse_pte_mapped_thps(struct khugepaged_mm_slot *mm_slot)
 {
 }
+
+static bool khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
+					  unsigned long addr)
+{
+	return false;
+}
 #endif
 
 static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
@@ -2118,8 +2155,26 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 							  &mmap_locked, cc);
 		}
 
-		if (*result == SCAN_SUCCEED)
+		switch (*result) {
+		case SCAN_PTE_MAPPED_HUGEPAGE: {
+			pmd_t *pmd;
+
+			*result = find_pmd_or_thp_or_none(mm,
+							  khugepaged_scan.address,
+							  &pmd);
+			if (*result != SCAN_SUCCEED)
+				break;
+			if (!khugepaged_add_pte_mapped_thp(mm,
+							   khugepaged_scan.address))
+				break;
+		} fallthrough;
+		case SCAN_SUCCEED:
 			++khugepaged_pages_collapsed;
+			break;
+		default:
+			break;
+		}
+
 		/* move to next address */
 		khugepaged_scan.address += HPAGE_PMD_SIZE;
 		progress += HPAGE_PMD_NR;
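
Not part of the patch - given the Documentation note above that
pages_collapsed is only a rough progress indicator, a minimal userspace
snippet for sampling it (it simply reads the sysfs file named in the
patch; pair it with /proc/vmstat for more precise accounting):

	#include <stdio.h>

	int main(void)
	{
		const char *path =
			"/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed";
		unsigned long long collapsed;
		FILE *f = fopen(path, "r");

		if (!f)
			return 1;
		if (fscanf(f, "%llu", &collapsed) == 1)
			printf("pages_collapsed: %llu\n", collapsed);
		fclose(f);
		return 0;
	}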