From patchwork Thu Nov 8 18:12:00 2018
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 10674843
From: Joel Fernandes <joel@joelfernandes.org>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, "Joel Fernandes (Google)", "Kirill A. Shutemov",
    William Kucharski, akpm@linux-foundation.org, Andrey Ryabinin,
    Andy Lutomirski, anton.ivanov@kot-begemot.co.uk, Borislav Petkov,
    Catalin Marinas, Chris Zankel, dancol@google.com, Dave Hansen,
    "David S. Miller", Fenghua Yu, Geert Uytterhoeven, Guan Xuetao,
    Helge Deller, hughd@google.com, Ingo Molnar,
    "James E.J. Bottomley", Jeff Dike, Jonas Bonn, Julia Lawall,
    kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu, Ley Foon Tan,
    linux-alpha@vger.kernel.org, linux-hexagon@vger.kernel.org,
    linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
    linux-mips@linux-mips.org, linux-mm@kvack.org,
    linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-snps-arc@lists.infradead.org,
    linux-um@lists.infradead.org, linux-xtensa@linux-xtensa.org,
    lokeshgidra@google.com, Max Filippov, Michal Hocko, minchan@kernel.org,
    nios2-dev@lists.rocketboards.org, pantin@google.com, Peter Zijlstra,
    Richard Weinberger, Rich Felker, Sam Creasey, sparclinux@vger.kernel.org,
    Stafford Horne, Stefan Kristiansson, Thomas Gleixner, Tony Luck,
    Will Deacon, x86@kernel.org (maintainer:X86 ARCHITECTURE (32-BIT AND
    64-BIT)), Yoshinori Sato
Subject: [PATCH -next-akpm 2/3] mm: speed up mremap by 20x on large regions (v5)
Date: Thu, 8 Nov 2018 10:12:00 -0800
Message-Id: <20181108181201.88826-3-joelaf@google.com>
In-Reply-To: <20181108181201.88826-1-joelaf@google.com>
References: <20181108181201.88826-1-joelaf@google.com>

From: "Joel Fernandes (Google)"

Android needs to mremap large regions of memory during memory management
related operations. The mremap system call can be really slow if THP is
not enabled. The bottleneck is move_page_tables(), which copies one pte
at a time and can be really slow across a large map. Turning on THP may
not be a viable option, and it is not for us. This patch speeds up mremap
on non-THP systems by copying at the PMD level when possible.

The speedup is an order of magnitude on x86 (~20x). On a 1GB mremap, the
completion time drops from 3.4-3.6 milliseconds to 144-160 microseconds.

Before:
Total mremap time for 1GB data: 3521942 nanoseconds.
Total mremap time for 1GB data: 3449229 nanoseconds.
Total mremap time for 1GB data: 3488230 nanoseconds.

After:
Total mremap time for 1GB data: 150279 nanoseconds.
Total mremap time for 1GB data: 144665 nanoseconds.
Total mremap time for 1GB data: 158708 nanoseconds.

In case THP is enabled, the optimization is mostly skipped except in
certain situations.

Acked-by: Kirill A. Shutemov
Reviewed-by: William Kucharski
Signed-off-by: Joel Fernandes (Google)
---
Note that since the bug fix in [1], we now have to flush the TLB on every
PMD move. The above numbers were obtained on x86 with a flush done on
every move. For arm64, I previously encountered performance issues doing
a flush every time we move; however, Will Deacon says [2] the performance
should be better now with recent releases. Until we can evaluate arm64, I
am dropping the patch that enables HAVE_MOVE_PMD for ARM64 for now. It
can be added back once we finish the performance evaluation.

Also of note is that the speedup on arm64 with this patch, but without
the TLB flush on every PMD move, is around 500x.

[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=1695
[2] https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg140837.html
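Numbers like the ones above can be gathered with a small userspace
harness along the following lines. This is a minimal sketch, not the
exact test used for the measurements; the PROT_NONE placeholder
destination and the use of clock_gettime(CLOCK_MONOTONIC) are
assumptions for illustration:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define SIZE	(1UL << 30)	/* 1GB, matching the numbers above */

int main(void)
{
	struct timespec start, end;
	void *src, *dst, *moved;
	long long ns;

	src = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	/* Reserve a destination so MREMAP_FIXED forces an actual move. */
	dst = mmap(NULL, SIZE, PROT_NONE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (src == MAP_FAILED || dst == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Fault in every page so there are page tables to move. */
	memset(src, 1, SIZE);

	clock_gettime(CLOCK_MONOTONIC, &start);
	moved = mremap(src, SIZE, SIZE, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
	clock_gettime(CLOCK_MONOTONIC, &end);
	if (moved == MAP_FAILED) {
		perror("mremap");
		return 1;
	}

	ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
	     (end.tv_nsec - start.tv_nsec);
	printf("Total mremap time for 1GB data: %lld nanoseconds.\n", ns);
	return 0;
}

Pre-faulting the source with memset() matters: an untouched anonymous
mapping has no page tables for move_page_tables() to move, so the timing
would measure almost nothing.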
 arch/Kconfig |  5 +++++
 mm/mremap.c  | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 67 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index e1e540ffa979..b70c952ac838 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -535,6 +535,11 @@ config HAVE_IRQ_TIME_ACCOUNTING
 	  Archs need to ensure they use a high enough resolution clock to
 	  support irq time accounting and then call
 	  enable_sched_clock_irqtime().
 
+config HAVE_MOVE_PMD
+	bool
+	help
+	  Archs that select this are able to move page tables at the PMD level.
+
 config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	bool
 
diff --git a/mm/mremap.c b/mm/mremap.c
index 7c9ab747f19d..2591e512373a 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -191,6 +191,50 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		drop_rmap_locks(vma);
 }
 
+static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+		  unsigned long new_addr, unsigned long old_end,
+		  pmd_t *old_pmd, pmd_t *new_pmd)
+{
+	spinlock_t *old_ptl, *new_ptl;
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t pmd;
+
+	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
+	    || old_end - old_addr < PMD_SIZE)
+		return false;
+
+	/*
+	 * The destination pmd shouldn't be established, free_pgtables()
+	 * should have released it.
+	 */
+	if (WARN_ON(!pmd_none(*new_pmd)))
+		return false;
+
+	/*
+	 * We don't have to worry about the ordering of src and dst
+	 * ptlocks because exclusive mmap_sem prevents deadlock.
+	 */
+	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
+	new_ptl = pmd_lockptr(mm, new_pmd);
+	if (new_ptl != old_ptl)
+		spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
+
+	/* Clear the pmd */
+	pmd = *old_pmd;
+	pmd_clear(old_pmd);
+
+	VM_BUG_ON(!pmd_none(*new_pmd));
+
+	/* Set the new pmd */
+	set_pmd_at(mm, new_addr, new_pmd, pmd);
+	flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+	if (new_ptl != old_ptl)
+		spin_unlock(new_ptl);
+	spin_unlock(old_ptl);
+
+	return true;
+}
+
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
 		unsigned long new_addr, unsigned long len,
@@ -237,7 +281,25 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
+		} else if (extent == PMD_SIZE) {
+#ifdef CONFIG_HAVE_MOVE_PMD
+			/*
+			 * If the extent is PMD-sized, try to speed the move by
+			 * moving at the PMD level if possible.
+			 */
+			bool moved;
+
+			if (need_rmap_locks)
+				take_rmap_locks(vma);
+			moved = move_normal_pmd(vma, old_addr, new_addr,
+					old_end, old_pmd, new_pmd);
+			if (need_rmap_locks)
+				drop_rmap_locks(vma);
+			if (moved)
+				continue;
+#endif
 		}
+
 		if (pte_alloc(new_vma->vm_mm, new_pmd))
 			break;
 		next = (new_addr + PMD_SIZE) & PMD_MASK;
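The entry check in move_normal_pmd() above admits only extents where both
the source and destination are PMD-aligned and a full PMD of address
space remains; everything else falls back to the pte-by-pte copy. A
standalone sketch of that predicate follows, assuming the common x86_64
PMD_SIZE of 2MB; the can_move_pmd() helper name is made up for
illustration:

#include <stdbool.h>
#include <stdio.h>

#define PMD_SHIFT	21			/* assumed: x86_64 */
#define PMD_SIZE	(1UL << PMD_SHIFT)	/* 2MB */
#define PMD_MASK	(~(PMD_SIZE - 1))

/* Mirrors the alignment/extent check at the top of move_normal_pmd(). */
static bool can_move_pmd(unsigned long old_addr, unsigned long new_addr,
			 unsigned long old_end)
{
	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
	    || old_end - old_addr < PMD_SIZE)
		return false;
	return true;
}

int main(void)
{
	/* Both addresses PMD-aligned, full 2MB extent: fast path, prints 1. */
	printf("%d\n", can_move_pmd(0x200000UL, 0x40000000UL, 0x400000UL));
	/* Source off by a page: falls back to moving ptes, prints 0. */
	printf("%d\n", can_move_pmd(0x201000UL, 0x40000000UL, 0x401000UL));
	return 0;
}

When the check passes, a single set_pmd_at() stands in for up to 512 pte
copies (with 4K pages), which is where the ~20x win comes from.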