From patchwork Thu Aug 21 15:43:30 2014
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 4758621
From: Steve Capper
To: linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
	linux@arm.linux.org.uk, linux-arch@vger.kernel.org, linux-mm@kvack.org
Cc: mark.rutland@arm.com, anders.roxell@linaro.org, peterz@infradead.org,
	gary.robertson@linaro.org, will.deacon@arm.com, mgorman@suse.de,
	dann.frazier@canonical.com, Steve Capper, akpm@linux-foundation.org,
	christoffer.dall@linaro.org
Subject: [PATCH V2 4/6] arm: mm: Enable RCU fast_gup
Date: Thu, 21 Aug 2014 16:43:30 +0100
Message-Id: <1408635812-31584-5-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1408635812-31584-1-git-send-email-steve.capper@linaro.org>
References: <1408635812-31584-1-git-send-email-steve.capper@linaro.org>

Activate the RCU fast_gup for ARM. We also need to force THP splits to
broadcast an IPI so that we block in the fast_gup page walker. As THP
splits are comparatively rare, this should not lead to a noticeable
performance degradation.

Some prerequisite functions, pud_write and pud_page, are also added.
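For reference, a rough sketch of the ordering this relies on, using a
hypothetical walk_pmds_lockless() helper rather than the real generic
fast_gup walker: the walk runs with interrupts disabled, so
kick_all_cpus_sync() in pmdp_splitting_flush() cannot return while any CPU
is still inside the walk, and a walk started after the splitting bit has
been set will see pmd_trans_splitting() and bail out to the slow path.

#include <linux/irqflags.h>
#include <linux/mm_types.h>

/* Hypothetical stand-in for the lockless page-table walk; illustration only. */
int walk_pmds_lockless(struct mm_struct *mm, unsigned long start,
		       unsigned long end, struct page **pages);

static int fast_gup_sketch(struct mm_struct *mm, unsigned long start,
			   unsigned long end, struct page **pages)
{
	unsigned long flags;
	int nr;

	local_irq_save(flags);		/* IPIs are held off from here ... */
	nr = walk_pmds_lockless(mm, start, end, pages);
	local_irq_restore(flags);	/* ... to here, so the splitter's IPI
					 * is only acknowledged once we have
					 * left the walk. */
	return nr;
}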
Signed-off-by: Steve Capper
Reviewed-by: Catalin Marinas
---
 arch/arm/Kconfig                      |  4 ++++
 arch/arm/include/asm/pgtable-3level.h |  8 ++++++++
 arch/arm/mm/flush.c                   | 15 +++++++++++++++
 3 files changed, 27 insertions(+)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index cc740d2..21f12be 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1645,6 +1645,10 @@ config ARCH_SELECT_MEMORY_MODEL
 config HAVE_ARCH_PFN_VALID
 	def_bool ARCH_HAS_HOLES_MEMORYMODEL || !SPARSEMEM
 
+config HAVE_RCU_GUP
+	def_bool y
+	depends on ARM_LPAE
+
 config HIGHMEM
 	bool "High Memory Support"
 	depends on MMU
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index 16122d4..a31ecdad 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -224,6 +224,8 @@ static inline pte_t pte_mkspecial(pte_t pte)
 #define __HAVE_ARCH_PMD_WRITE
 #define pmd_write(pmd)		(pmd_isclear((pmd), L_PMD_SECT_RDONLY))
 #define pmd_dirty(pmd)		(pmd_isset((pmd), L_PMD_SECT_DIRTY))
+#define pud_page(pud)		pmd_page(__pmd(pud_val(pud)))
+#define pud_write(pud)		pmd_write(__pmd(pud_val(pud)))
 
 #define pmd_hugewillfault(pmd)	(!pmd_young(pmd) || !pmd_write(pmd))
 #define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
@@ -231,6 +233,12 @@ static inline pte_t pte_mkspecial(pte_t pte)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define pmd_trans_huge(pmd)	(pmd_val(pmd) && !pmd_table(pmd))
 #define pmd_trans_splitting(pmd) (pmd_isset((pmd), L_PMD_SECT_SPLITTING))
+
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#define __HAVE_ARCH_PMDP_SPLITTING_FLUSH
+void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
+			  pmd_t *pmdp);
+#endif
 #endif
 
 #define PMD_BIT_FUNC(fn,op) \
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 43d54f5..265b836 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -400,3 +400,18 @@ void __flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned l
 	 */
 	__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
 }
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
+			  pmd_t *pmdp)
+{
+	pmd_t pmd = pmd_mksplitting(*pmdp);
+	VM_BUG_ON(address & ~PMD_MASK);
+	set_pmd_at(vma->vm_mm, address, pmdp, pmd);
+
+	/* dummy IPI to serialise against fast_gup */
+	kick_all_cpus_sync();
+}
+#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
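As a usage note (not part of this patch), the pay-off of the fast path is
that callers can pin user pages without taking mmap_sem in the common case;
a minimal caller, assuming the stock get_user_pages_fast() interface, looks
roughly like this:

#include <linux/mm.h>

/* Minimal sketch of a fast_gup user: pin one page of the current task's
 * address space for writing and return it with an elevated refcount. */
static struct page *pin_one_page_sketch(unsigned long addr)
{
	struct page *page;

	if (get_user_pages_fast(addr, 1, 1, &page) != 1)
		return NULL;	/* fast path could not pin it; a caller may
				 * fall back to get_user_pages() here */
	return page;		/* drop with put_page() when finished */
}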