From patchwork Fri Oct 2 16:20:48 2020
From: Kalesh Singh <kaleshsingh@google.com>
Date: Fri, 2 Oct 2020 16:20:48 +0000
Subject: [PATCH v2 3/6] mm: Speedup mremap on 1GB or larger regions
Message-Id: <20201002162101.665549-4-kaleshsingh@google.com>
In-Reply-To: <20201002162101.665549-1-kaleshsingh@google.com>
References: <20201002162101.665549-1-kaleshsingh@google.com>
X-Patchwork-Id: 11813959

Android needs to move large memory regions for garbage collection.
The GC requires moving physical pages of a multi-gigabyte heap using
mremap. During this move, the application threads have to be paused for
correctness. It is critical to keep this pause as short as possible to
avoid jitter during user interaction.

Optimize mremap for regions of 1GB or larger by moving at the PUD/PGD
level when the source and destination addresses are PUD-aligned. For
CONFIG_PGTABLE_LEVELS == 3, moving at the PUD level effectively moves
PGD entries, since the PUD entry is "folded back" onto the PGD entry.

Add HAVE_MOVE_PUD so that architectures where moving at the PUD level
isn't supported/tested can turn this off by not selecting the config.

Fix the build test error from v1 of this series reported by the kernel
test robot in [1].

[1] https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org/thread/CKPGL4FH4NG7TGH2CVYX2UX76L25BTA3/

Signed-off-by: Kalesh Singh
Reported-by: kernel test robot
---
Changes in v2:
  - Update commit message with a description of Android GC's use case.
  - Move set_pud_at() to a separate patch.
  - Use switch() instead of ifs in move_pgt_entry().
  - Fix the build test error reported by the kernel test robot on x86_64
    in [1]. Guard move_huge_pmd() with
    IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE), since this section doesn't
    get optimized out in the kernel test robot's build test when
    HAVE_MOVE_PUD is enabled.
  - Keep WARN_ON_ONCE(1) instead of BUILD_BUG() for the same reason.

 arch/Kconfig |   7 ++
 mm/mremap.c  | 220 ++++++++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 197 insertions(+), 30 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index af14a567b493..5eabaa00bf9b 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -602,6 +602,13 @@ config HAVE_IRQ_TIME_ACCOUNTING
 	  Archs need to ensure they use a high enough resolution clock to
 	  support irq time accounting and then call
 	  enable_sched_clock_irqtime().
 
+config HAVE_MOVE_PUD
+	bool
+	help
+	  Architectures that select this are able to move page tables at the
+	  PUD level. If there are only 3 page table levels, the move effectively
+	  happens at the PGD level.
+
 config HAVE_MOVE_PMD
 	bool
 	help

diff --git a/mm/mremap.c b/mm/mremap.c
index 138abbae4f75..c1d6ab667d70 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -249,14 +249,176 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	return true;
 }
+#else
+static inline bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+		unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
+{
+	return false;
+}
 #endif
 
+#ifdef CONFIG_HAVE_MOVE_PUD
+static pud_t *get_old_pud(struct mm_struct *mm, unsigned long addr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+
+	pgd = pgd_offset(mm, addr);
+	if (pgd_none_or_clear_bad(pgd))
+		return NULL;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none_or_clear_bad(p4d))
+		return NULL;
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none_or_clear_bad(pud))
+		return NULL;
+
+	return pud;
+}
+
+static pud_t *alloc_new_pud(struct mm_struct *mm, struct vm_area_struct *vma,
+			    unsigned long addr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+
+	pgd = pgd_offset(mm, addr);
+	p4d = p4d_alloc(mm, pgd, addr);
+	if (!p4d)
+		return NULL;
+	pud = pud_alloc(mm, p4d, addr);
+	if (!pud)
+		return NULL;
+
+	return pud;
+}
+
+static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
+		unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
+{
+	spinlock_t *old_ptl, *new_ptl;
+	struct mm_struct *mm = vma->vm_mm;
+	pud_t pud;
+
+	/*
+	 * The destination pud shouldn't be established, free_pgtables()
+	 * should have released it.
+	 */
+	if (WARN_ON_ONCE(!pud_none(*new_pud)))
+		return false;
+
+	/*
+	 * We don't have to worry about the ordering of src and dst
+	 * ptlocks because exclusive mmap_lock prevents deadlock.
+	 */
+	old_ptl = pud_lock(vma->vm_mm, old_pud);
+	new_ptl = pud_lockptr(mm, new_pud);
+	if (new_ptl != old_ptl)
+		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+	/* Clear the pud */
+	pud = *old_pud;
+	pud_clear(old_pud);
+
+	VM_BUG_ON(!pud_none(*new_pud));
+
+	/* Set the new pud */
+	set_pud_at(mm, new_addr, new_pud, pud);
+	flush_tlb_range(vma, old_addr, old_addr + PUD_SIZE);
+	if (new_ptl != old_ptl)
+		spin_unlock(new_ptl);
+	spin_unlock(old_ptl);
+
+	return true;
+}
+#else
+static inline bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
+		unsigned long new_addr, pud_t *old_pud, pud_t *new_pud)
+{
+	return false;
+}
+#endif
+
+enum pgt_entry {
+	NORMAL_PMD,
+	HPAGE_PMD,
+	NORMAL_PUD,
+};
+
+/*
+ * Returns an extent of the corresponding size for the pgt_entry specified if
+ * valid. Else returns a smaller extent bounded by the end of the source and
+ * destination pgt_entry. Returns 0 if an invalid pgt_entry is specified.
+ */
+static unsigned long get_extent(enum pgt_entry entry, unsigned long old_addr,
+			unsigned long old_end, unsigned long new_addr)
+{
+	unsigned long next, extent, mask, size;
+
+	if (entry == NORMAL_PMD || entry == HPAGE_PMD) {
+		mask = PMD_MASK;
+		size = PMD_SIZE;
+	} else if (entry == NORMAL_PUD) {
+		mask = PUD_MASK;
+		size = PUD_SIZE;
+	} else
+		return 0;
+
+	next = (old_addr + size) & mask;
+	/* even if next overflowed, extent below will be ok */
+	extent = (next > old_end) ? old_end - old_addr : next - old_addr;
+	next = (new_addr + size) & mask;
+	if (extent > next - new_addr)
+		extent = next - new_addr;
+	return extent;
+}
+
+/*
+ * Attempts to speedup the move by moving entry at the level corresponding to
+ * pgt_entry. Returns true if the move was successful, else false.
+ */
+static bool move_pgt_entry(enum pgt_entry entry, struct vm_area_struct *vma,
+			unsigned long old_addr, unsigned long new_addr,
+			void *old_entry, void *new_entry, bool need_rmap_locks)
+{
+	bool moved = false;
+
+	/* See comment in move_ptes() */
+	if (need_rmap_locks)
+		take_rmap_locks(vma);
+
+	switch (entry) {
+	case NORMAL_PMD:
+		moved = move_normal_pmd(vma, old_addr, new_addr, old_entry,
+					new_entry);
+		break;
+	case NORMAL_PUD:
+		moved = move_normal_pud(vma, old_addr, new_addr, old_entry,
+					new_entry);
+		break;
+	case HPAGE_PMD:
+		moved = IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+			move_huge_pmd(vma, old_addr, new_addr, old_entry,
+				      new_entry);
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		break;
+	}
+
+	if (need_rmap_locks)
+		drop_rmap_locks(vma);
+
+	return moved;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
 		bool need_rmap_locks)
 {
-	unsigned long extent, next, old_end;
+	unsigned long extent, old_end;
 	struct mmu_notifier_range range;
 	pmd_t *old_pmd, *new_pmd;
 
@@ -269,14 +431,27 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 	for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
 		cond_resched();
-		next = (old_addr + PMD_SIZE) & PMD_MASK;
-		/* even if next overflowed, extent below will be ok */
-		extent = next - old_addr;
-		if (extent > old_end - old_addr)
-			extent = old_end - old_addr;
-		next = (new_addr + PMD_SIZE) & PMD_MASK;
-		if (extent > next - new_addr)
-			extent = next - new_addr;
+#ifdef CONFIG_HAVE_MOVE_PUD
+		/*
+		 * If extent is PUD-sized try to speed up the move by moving at the
+		 * PUD level if possible.
+		 */
+		extent = get_extent(NORMAL_PUD, old_addr, old_end, new_addr);
+		if (extent == PUD_SIZE) {
+			pud_t *old_pud, *new_pud;
+
+			old_pud = get_old_pud(vma->vm_mm, old_addr);
+			if (!old_pud)
+				continue;
+			new_pud = alloc_new_pud(vma->vm_mm, vma, new_addr);
+			if (!new_pud)
+				break;
+			if (move_pgt_entry(NORMAL_PUD, vma, old_addr, new_addr,
+					   old_pud, new_pud, need_rmap_locks))
+				continue;
+		}
+#endif
+		extent = get_extent(NORMAL_PMD, old_addr, old_end, new_addr);
 		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
 		if (!old_pmd)
 			continue;
@@ -284,18 +459,10 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		if (!new_pmd)
 			break;
 		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || pmd_devmap(*old_pmd)) {
-			if (extent == HPAGE_PMD_SIZE) {
-				bool moved;
-				/* See comment in move_ptes() */
-				if (need_rmap_locks)
-					take_rmap_locks(vma);
-				moved = move_huge_pmd(vma, old_addr, new_addr,
-						      old_pmd, new_pmd);
-				if (need_rmap_locks)
-					drop_rmap_locks(vma);
-				if (moved)
-					continue;
-			}
+			if (extent == HPAGE_PMD_SIZE &&
+			    move_pgt_entry(HPAGE_PMD, vma, old_addr, new_addr,
+					   old_pmd, new_pmd, need_rmap_locks))
+				continue;
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
@@ -305,15 +472,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			 * If the extent is PMD-sized, try to speed the move by
 			 * moving at the PMD level if possible.
 			 */
-			bool moved;
-
-			if (need_rmap_locks)
-				take_rmap_locks(vma);
-			moved = move_normal_pmd(vma, old_addr, new_addr,
-					old_pmd, new_pmd);
-			if (need_rmap_locks)
-				drop_rmap_locks(vma);
-			if (moved)
+			if (move_pgt_entry(NORMAL_PMD, vma, old_addr, new_addr,
+					   old_pmd, new_pmd, need_rmap_locks))
 				continue;
 #endif
 		}
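As an aside, not part of the patch itself: the fast path above hinges on
get_extent(). A PUD-level move is only attempted when a full PUD_SIZE
extent lines up at both the source and the destination; otherwise the
extent is clamped and move_page_tables() falls back to PMD- or PTE-level
copies. Below is a minimal standalone sketch of that clamping logic,
assuming a 1 GiB PUD_SIZE (4K pages, 4-level page tables):

/* Illustrative sketch of the extent clamping done by get_extent(). */
#include <stdio.h>

#define PUD_SIZE (1UL << 30)		/* assumption: 1 GiB PUD entries */
#define PUD_MASK (~(PUD_SIZE - 1))

/* Largest step movable at once, bounded by src and dst alignment. */
static unsigned long pud_extent(unsigned long old_addr, unsigned long old_end,
				unsigned long new_addr)
{
	unsigned long next, extent;

	next = (old_addr + PUD_SIZE) & PUD_MASK;
	extent = (next > old_end) ? old_end - old_addr : next - old_addr;
	next = (new_addr + PUD_SIZE) & PUD_MASK;
	if (extent > next - new_addr)
		extent = next - new_addr;
	return extent;
}

int main(void)
{
	unsigned long src = 4UL << 30;		/* PUD-aligned source */
	unsigned long end = src + (2UL << 30);	/* 2 GiB region */

	/* Aligned destination: full 1 GiB extents, PUD-level move. */
	printf("aligned dst:    %lu MiB\n",
	       pud_extent(src, end, 8UL << 30) >> 20);
	/* Destination off by 2 MiB: extent < PUD_SIZE, PMD/PTE fallback. */
	printf("misaligned dst: %lu MiB\n",
	       pud_extent(src, end, (8UL << 30) + (2UL << 20)) >> 20);
	return 0;
}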
From patchwork Fri Oct 2 16:20:49 2020
From: Kalesh Singh <kaleshsingh@google.com>
Date: Fri, 2 Oct 2020 16:20:49 +0000
Subject: [PATCH v2 4/6] arm64: Add set_pud_at() function
Message-Id: <20201002162101.665549-5-kaleshsingh@google.com>
In-Reply-To: <20201002162101.665549-1-kaleshsingh@google.com>
References: <20201002162101.665549-1-kaleshsingh@google.com>
X-Patchwork-Id: 11813969
set_pud_at() is used in move_normal_pud() for remapping pages at the
PUD level.

Signed-off-by: Kalesh Singh
---
 arch/arm64/include/asm/pgtable.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d5d3fbe73953..8848125e3024 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -415,6 +415,7 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
 #define set_pmd_at(mm, addr, pmdp, pmd)	set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
+#define set_pud_at(mm, addr, pudp, pud)	set_pte_at(mm, addr, (pte_t *)pudp, pud_pte(pud))
 
 #define __p4d_to_phys(p4d)	__pte_to_phys(p4d_pte(p4d))
 #define __phys_to_p4d_val(phys)	__phys_to_pte_val(phys)
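A note on why the one-line definition above is enough: on arm64 every
page-table level uses the same 64-bit descriptor format, so set_pud_at()
can simply reuse set_pte_at() after converting the value with pud_pte()
and casting the pointer. A simplified, self-contained sketch of that
level-folding pattern follows; the types and helper names here are
illustrative stand-ins, not the real kernel definitions:

#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t pte; } pte_t;
typedef struct { uint64_t pud; } pud_t;

#define __pte(x)	((pte_t){ (x) })
#define pud_val(p)	((p).pud)
#define pud_pte(p)	__pte(pud_val(p))

/* Stand-in for set_pte_at(): just store the descriptor. */
static void set_pte_at(pte_t *ptep, pte_t pte)
{
	ptep->pte = pte.pte;
}

/*
 * Same shape as the arm64 macro, with the mm and addr arguments dropped:
 * write a PUD entry through the PTE helper.
 */
#define set_pud_at(pudp, pud)	set_pte_at((pte_t *)(pudp), pud_pte(pud))

int main(void)
{
	pud_t slot = { 0 };
	pud_t entry = { 0x200000003ULL };	/* arbitrary descriptor bits */

	set_pud_at(&slot, entry);
	printf("installed descriptor: %#llx\n",
	       (unsigned long long)pud_val(slot));
	return 0;
}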
From patchwork Fri Oct 2 16:20:50 2020
From: Kalesh Singh <kaleshsingh@google.com>
Date: Fri, 2 Oct 2020 16:20:50 +0000
Subject: [PATCH v2 5/6] arm64: mremap speedup - Enable HAVE_MOVE_PUD
Message-Id: <20201002162101.665549-6-kaleshsingh@google.com>
In-Reply-To: <20201002162101.665549-1-kaleshsingh@google.com>
References: <20201002162101.665549-1-kaleshsingh@google.com>
X-Patchwork-Id: 11813975
HAVE_MOVE_PUD enables remapping pages at the PUD level if both the
source and destination addresses are PUD-aligned. With HAVE_MOVE_PUD
enabled, the measurements below show approximately a 19x improvement in
mremap performance on arm64.

------- Test Results ---------

The following results were obtained using a 5.4 kernel, by remapping a
PUD-aligned, 1GB-sized region to a PUD-aligned destination. The results
from 10 iterations of the test are given below.

Total mremap times for 1GB of data on arm64, in nanoseconds:

  Control       HAVE_MOVE_PUD

  1247761       74271
  1219896       46771
  1094792       59687
  1227760       48385
  1043698       76666
  1101771       50365
  1159896       52500
  1143594       75261
  1025833       61354
  1078125       48697

  1134312.6     59395.7    <-- Mean time in nanoseconds

A 1GB mremap completion time drops from ~1.1 milliseconds to
~59 microseconds on arm64 (~19x speedup).

Signed-off-by: Kalesh Singh
---
 arch/arm64/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 844d089668e3..4d521f0a5863 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -122,6 +122,7 @@ config ARM64
 	select HANDLE_DOMAIN_IRQ
 	select HARDIRQS_SW_RESEND
 	select HAVE_MOVE_PMD
+	select HAVE_MOVE_PUD
 	select HAVE_PCI
 	select HAVE_ACPI_APEI if (ACPI && EFI)
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
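For reference, a rough sketch of the kind of userspace harness that
produces numbers like those above: time mremap() of a 1 GiB region over
ten iterations with clock_gettime(). This is a reconstruction of the
described methodology, not the original test program; it over-allocates
and rounds addresses up to obtain PUD-aligned (1 GiB-aligned) source and
destination mappings, which is the precondition for the fast path.

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define GIB (1UL << 30)		/* assumed PUD_SIZE: 1 GiB */

/* Map a 2 GiB span and return a 1 GiB-aligned address inside it. */
static void *alloc_pud_aligned(int prot)
{
	uintptr_t raw = (uintptr_t)mmap(NULL, 2 * GIB, prot,
					MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == (uintptr_t)MAP_FAILED)
		return MAP_FAILED;
	return (void *)((raw + GIB - 1) & ~(GIB - 1));
}

static long long elapsed_ns(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
	for (int i = 0; i < 10; i++) {
		char *src = alloc_pud_aligned(PROT_READ | PROT_WRITE);
		char *dst = alloc_pud_aligned(PROT_NONE);
		if (src == MAP_FAILED || dst == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		memset(src, 1, GIB);	/* fault the source pages in */

		struct timespec t0, t1;
		clock_gettime(CLOCK_MONOTONIC, &t0);
		/* MREMAP_FIXED replaces whatever was mapped at dst. */
		void *moved = mremap(src, GIB, GIB,
				     MREMAP_MAYMOVE | MREMAP_FIXED, dst);
		clock_gettime(CLOCK_MONOTONIC, &t1);
		if (moved == MAP_FAILED) {
			perror("mremap");
			return 1;
		}

		printf("iteration %d: %lld ns\n", i, elapsed_ns(t0, t1));
		munmap(dst, GIB);
		/*
		 * The over-allocated slack around src/dst is leaked; it is
		 * never touched, so only address space is wasted. Fine for
		 * a one-shot benchmark sketch.
		 */
	}
	return 0;
}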
From patchwork Fri Oct 2 16:20:51 2020
From: Kalesh Singh <kaleshsingh@google.com>
Date: Fri, 2 Oct 2020 16:20:51 +0000
Subject: [PATCH v2 6/6] x86: mremap speedup - Enable HAVE_MOVE_PUD
Message-Id: <20201002162101.665549-7-kaleshsingh@google.com>
In-Reply-To: <20201002162101.665549-1-kaleshsingh@google.com>
References: <20201002162101.665549-1-kaleshsingh@google.com>
X-Patchwork-Id: 11813979
HAVE_MOVE_PUD enables remapping pages at the PUD level if both the
source and destination addresses are PUD-aligned. With HAVE_MOVE_PUD
enabled, the measurements below show approximately a 13x improvement in
mremap performance on x86.

------- Test Results ---------

The following results were obtained using a 5.4 kernel, by remapping a
PUD-aligned, 1GB-sized region to a PUD-aligned destination. The results
from 10 iterations of the test are given below.

Total mremap times for 1GB of data on x86, in nanoseconds:

  Control       HAVE_MOVE_PUD

  180394        15089
  235728        14056
  238931        25741
  187330        13838
  241742        14187
  177925        14778
  182758        14728
  160872        14418
  205813        15107
  245722        13998

  205721.5      15594      <-- Mean time in nanoseconds

A 1GB mremap completion time drops from ~205 microseconds to
~15 microseconds on x86 (~13x speedup).

Signed-off-by: Kalesh Singh
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7101ac64bb20..ff6e2755cab8 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -198,6 +198,7 @@ config X86
 	select HAVE_MIXED_BREAKPOINTS_REGS
 	select HAVE_MOD_ARCH_SPECIFIC
 	select HAVE_MOVE_PMD
+	select HAVE_MOVE_PUD
 	select HAVE_NMI
 	select HAVE_OPROFILE
 	select HAVE_OPTPROBES
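As a quick sanity check on the reported speedups, the means and ratios
can be recomputed from the raw samples in patches 5 and 6. The small
standalone program below (not part of the series) reproduces the stated
means of 1134312.6/59395.7 ns on arm64 and 205721.5/15594 ns on x86,
i.e. ~19x and ~13x:

#include <stdio.h>

/* Per-iteration mremap times (ns) copied from the tables above. */
static const double arm64_control[] = { 1247761, 1219896, 1094792, 1227760,
	1043698, 1101771, 1159896, 1143594, 1025833, 1078125 };
static const double arm64_pud[] = { 74271, 46771, 59687, 48385, 76666,
	50365, 52500, 75261, 61354, 48697 };
static const double x86_control[] = { 180394, 235728, 238931, 187330, 241742,
	177925, 182758, 160872, 205813, 245722 };
static const double x86_pud[] = { 15089, 14056, 25741, 13838, 14187, 14778,
	14728, 14418, 15107, 13998 };

static double mean(const double *v, int n)
{
	double sum = 0;

	for (int i = 0; i < n; i++)
		sum += v[i];
	return sum / n;
}

int main(void)
{
	double a_c = mean(arm64_control, 10), a_p = mean(arm64_pud, 10);
	double x_c = mean(x86_control, 10), x_p = mean(x86_pud, 10);

	printf("arm64: %.1f ns -> %.1f ns (%.1fx)\n", a_c, a_p, a_c / a_p);
	printf("x86:   %.1f ns -> %.1f ns (%.1fx)\n", x_c, x_p, x_c / x_p);
	return 0;
}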