From patchwork Wed Sep 30 22:21:20 2020
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 11810421
Date: Wed, 30 Sep 2020 22:21:20 +0000
In-Reply-To: <20200930222130.4175584-1-kaleshsingh@google.com>
Message-Id: <20200930222130.4175584-4-kaleshsingh@google.com>
References: <20200930222130.4175584-1-kaleshsingh@google.com>
Subject: [PATCH 3/5] mm: Speedup mremap on 1GB or larger regions
From: Kalesh Singh <kaleshsingh@google.com>
List-Id: <linux-arm-kernel.lists.infradead.org>

Android needs to move large memory regions for garbage collection.
Optimize mremap for >= 1GB-sized regions by moving at the PUD/PGD
level if the source and destination addresses are PUD-aligned. For
CONFIG_PGTABLE_LEVELS == 3, moving at the PUD level in effect moves
PGD entries, since the PUD entry is "folded back" onto the PGD entry.
Add HAVE_MOVE_PUD so that architectures where moving at the PUD level
isn't supported/tested can turn this off by not selecting the config.

Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
---
 arch/Kconfig                     |   7 +
 arch/arm64/include/asm/pgtable.h |   1 +
 mm/mremap.c                      | 211 ++++++++++++++++++++++++++-----
 3 files changed, 189 insertions(+), 30 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index af14a567b493..5eabaa00bf9b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -602,6 +602,13 @@ config HAVE_IRQ_TIME_ACCOUNTING
 	  Archs need to ensure they use a high enough resolution clock to
 	  support irq time accounting and then call
 	  enable_sched_clock_irqtime().
 
+config HAVE_MOVE_PUD
+	bool
+	help
+	  Architectures that select this are able to move page tables at the
+	  PUD level. If there are only 3 page table levels, the move effectively
+	  happens at the PGD level.
+
 config HAVE_MOVE_PMD
 	bool
 	help

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d5d3fbe73953..8848125e3024 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -415,6 +415,7 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
 #define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
+#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
 
 #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
 #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)

diff --git a/mm/mremap.c b/mm/mremap.c
index 138abbae4f75..a5a1440bd366 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -249,14 +249,167 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 
 	return true;
 }
+#else
+static inline bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+		unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
+{
+	return false;
+}
 #endif
 
+#ifdef CONFIG_HAVE_MOVE_PUD
+static pud_t *get_old_pud(struct mm_struct *mm, unsigned long addr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+
+	pgd = pgd_offset(mm, addr);
+	if (pgd_none_or_clear_bad(pgd))
+		return NULL;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none_or_clear_bad(p4d))
+		return NULL;
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none_or_clear_bad(pud))
+		return NULL;
+
+	return pud;
+}
+
+static pud_t *alloc_new_pud(struct mm_struct *mm, struct vm_area_struct *vma,
+		unsigned long addr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+
+	pgd = pgd_offset(mm, addr);
+	p4d = p4d_alloc(mm, pgd, addr);
+	if (!p4d)
+		return NULL;
+	pud = pud_alloc(mm, p4d, addr);
+	if (!pud)
+		return NULL;
+
+	return pud;
+}
+
+static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
+		unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
+{
+	spinlock_t *old_ptl, *new_ptl;
+	struct mm_struct *mm = vma->vm_mm;
+	pud_t pud;
+
+	/*
+	 * The destination pud shouldn't be established, free_pgtables()
+	 * should have released it.
+	 */
+	if (WARN_ON_ONCE(!pud_none(*new_pud)))
+		return false;
+
+	/*
+	 * We don't have to worry about the ordering of src and dst
+	 * ptlocks because exclusive mmap_lock prevents deadlock.
+	 */
+	old_ptl = pud_lock(vma->vm_mm, old_pud);
+	new_ptl = pud_lockptr(mm, new_pud);
+	if (new_ptl != old_ptl)
+		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+	/* Clear the pud */
+	pud = *old_pud;
+	pud_clear(old_pud);
+
+	VM_BUG_ON(!pud_none(*new_pud));
+
+	/* Set the new pud */
+	set_pud_at(mm, new_addr, new_pud, pud);
+	flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
+	if (new_ptl != old_ptl)
+		spin_unlock(new_ptl);
+	spin_unlock(old_ptl);
+
+	return true;
+}
+#else
+static inline bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
+		unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
+{
+	return false;
+}
+#endif
+
+enum pgt_entry {
+	NORMAL_PMD,
+	HPAGE_PMD,
+	NORMAL_PUD,
+};
+
+/*
+ * Returns an extent of the corresponding size for the pgt_entry specified if valid.
+ * Else returns a smaller extent bounded by the end of the source and destination
+ * pgt_entry. Returns 0 if an invalid pgt_entry is specified.
+ */
+static unsigned long get_extent(enum pgt_entry entry, unsigned long old_addr,
+		unsigned long old_end, unsigned long new_addr)
+{
+	unsigned long next, extent, mask, size;
+
+	if (entry == NORMAL_PMD || entry == HPAGE_PMD) {
+		mask = PMD_MASK;
+		size = PMD_SIZE;
+	} else if (entry == NORMAL_PUD) {
+		mask = PUD_MASK;
+		size = PUD_SIZE;
+	} else
+		return 0;
+
+	next = (old_addr + size) & mask;
+	/* even if next overflowed, extent below will be ok */
+	extent = (next > old_end) ? old_end - old_addr : next - old_addr;
+	next = (new_addr + size) & mask;
+	if (extent > next - new_addr)
+		extent = next - new_addr;
+	return extent;
+}
+
+/*
+ * Attempts to speedup the move by moving entry at the level corresponding to
+ * pgt_entry. Returns true if the move was successful, else false.
+ */
+static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
+		unsigned long old_addr, unsigned long new_addr, void *old_entry,
+		void *new_entry, bool need_rmap_locks)
+{
+	bool moved = false;
+
+	/* See comment in move_ptes() */
+	if (need_rmap_locks)
+		take_rmap_locks(vma);
+	if (entry == NORMAL_PMD)
+		moved = move_normal_pmd(vma, old_addr, new_addr, old_entry, new_entry);
+	else if (entry == NORMAL_PUD)
+		moved = move_normal_pud(vma, old_addr, new_addr, old_entry, new_entry);
+	else if (entry == HPAGE_PMD)
+		moved = move_huge_pmd(vma, old_addr, new_addr, old_entry, new_entry);
+	else
+		WARN_ON_ONCE(1);
+	if (need_rmap_locks)
+		drop_rmap_locks(vma);
+
+	return moved;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
 		bool need_rmap_locks)
 {
-	unsigned long extent, next, old_end;
+	unsigned long extent, old_end;
 	struct mmu_notifier_range range;
 	pmd_t *old_pmd, *new_pmd;
@@ -269,14 +422,27 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 
 	for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
 		cond_resched();
-		next = (old_addr + PMD_SIZE) & PMD_MASK;
-		/* even if next overflowed, extent below will be ok */
-		extent = next - old_addr;
-		if (extent > old_end - old_addr)
-			extent = old_end - old_addr;
-		next = (new_addr + PMD_SIZE) & PMD_MASK;
-		if (extent > next - new_addr)
-			extent = next - new_addr;
+#ifdef CONFIG_HAVE_MOVE_PUD
+		/*
+		 * If extent is PUD-sized try to speed up the move by moving at the
+		 * PUD level if possible.
+		 */
+		extent = get_extent(NORMAL_PUD, old_addr, old_end, new_addr);
+		if (extent == PUD_SIZE) {
+			pud_t *old_pud, *new_pud;
+
+			old_pud = get_old_pud(vma->vm_mm, old_addr);
+			if (!old_pud)
+				continue;
+			new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr);
+			if (!new_pud)
+				break;
+			if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr,
+					old_pud, new_pud, need_rmap_locks))
+				continue;
+		}
+#endif
+		extent = get_extent(NORMAL_PMD, old_addr, old_end, new_addr);
 		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
 		if (!old_pmd)
 			continue;
@@ -284,18 +450,10 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		if (!new_pmd)
 			break;
 		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) ||
 		    pmd_devmap(*old_pmd)) {
-			if (extent == HPAGE_PMD_SIZE) {
-				bool moved;
-				/* See comment in move_ptes() */
-				if (need_rmap_locks)
-					take_rmap_locks(vma);
-				moved = move_huge_pmd(vma, old_addr, new_addr,
-						      old_pmd, new_pmd);
-				if (need_rmap_locks)
-					drop_rmap_locks(vma);
-				if (moved)
-					continue;
-			}
+			if (extent == HPAGE_PMD_SIZE &&
+			    move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr, old_pmd,
+					   new_pmd, need_rmap_locks))
+				continue;
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
@@ -305,15 +463,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		 * If the extent is PMD-sized, try to speed the move by
 		 * moving at the PMD level if possible.
 		 */
-		bool moved;
-
-		if (need_rmap_locks)
-			take_rmap_locks(vma);
-		moved = move_normal_pmd(vma, old_addr, new_addr,
-					old_pmd, new_pmd);
-		if (need_rmap_locks)
-			drop_rmap_locks(vma);
-		if (moved)
+		if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr,
+				old_pmd, new_pmd, need_rmap_locks))
 			continue;
 #endif
 	}
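
For readers less familiar with this path, below is a minimal userspace
sketch (not part of the patch) of the case the series targets: an
anonymous region of at least 1GB whose source and destination addresses
are both PUD-aligned. The hint addresses, the 1GB PUD_SIZE (4K pages on
arm64/x86_64), and MAP_FIXED_NOREPLACE (Linux 4.17+, glibc 2.28+) are
assumptions chosen for illustration. On a kernel with
CONFIG_HAVE_MOVE_PUD the move then rewrites one PUD entry per gigabyte
instead of 512 PMD entries (or 262144 PTEs).

/*
 * Illustrative only -- not part of the patch. Assumes a 64-bit Linux
 * system with 4K pages and a glibc exposing the five-argument
 * mremap(old_addr, old_len, new_len, flags, new_addr).
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 30;			/* 1GB == PUD_SIZE  */
	void *src_hint = (void *)(1UL << 32);	/* 1GB-aligned hint */
	void *dst_hint = (void *)(3UL << 32);	/* 1GB-aligned hint */

	void *src = mmap(src_hint, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
			 -1, 0);
	if (src == MAP_FAILED)
		return 1;

	/* Fault the pages in so there are page tables to move. */
	for (size_t i = 0; i < len; i += 4096)
		((char *)src)[i] = 1;

	/*
	 * old_addr and new_addr are both PUD-aligned and len is a
	 * multiple of PUD_SIZE, so get_extent(NORMAL_PUD, ...) yields
	 * PUD_SIZE and move_page_tables() can take the PUD fast path.
	 */
	void *dst = mremap(src, len, len, MREMAP_MAYMOVE | MREMAP_FIXED,
			   dst_hint);
	if (dst == MAP_FAILED)
		return 1;

	printf("moved %zu bytes from %p to %p\n", len, src, dst);
	return 0;
}

Without HAVE_MOVE_PUD the same call still succeeds; move_page_tables()
simply falls back to PMD-sized (and ultimately PTE-sized) copies, which
is where the garbage-collection pause time this series is trimming
comes from.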