From patchwork Fri Apr  8 22:50:28 2016
X-Patchwork-Submitter: David Daney
X-Patchwork-Id: 8786771
From: David Daney
To: Will Deacon, linux-arm-kernel@lists.infradead.org, Rob Herring,
	Frank Rowand, Grant Likely, Pawel Moll, Ian Campbell, Kumar Gala,
	Ganapatrao Kulkarni, Robert Richter, Ard Biesheuvel, Matt Fleming,
	Mark Rutland, Catalin Marinas
Subject: [PATCH v16 6/6] arm64, mm, numa: Add NUMA balancing support for arm64.
Date: Fri, 8 Apr 2016 15:50:28 -0700
Message-Id: <1460155828-8690-7-git-send-email-ddaney.cavm@gmail.com>
In-Reply-To: <1460155828-8690-1-git-send-email-ddaney.cavm@gmail.com>
References: <1460155828-8690-1-git-send-email-ddaney.cavm@gmail.com>
Cc: devicetree@vger.kernel.org, linux-efi@vger.kernel.org,
	linux-kernel@vger.kernel.org, David Daney

From: Ganapatrao Kulkarni

Enable NUMA balancing for arm64 platforms. Add pte and pmd protnone
helpers for use by automatic NUMA balancing.
Reviewed-by: Robert Richter
Signed-off-by: Ganapatrao Kulkarni
Signed-off-by: David Daney
Reviewed-by: Steve Capper
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/pgtable.h | 15 +++++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 99f9b55..a578080 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -11,6 +11,7 @@ config ARM64
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
 	select ARCH_WANT_FRAME_POINTERS
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 989fef1..89b8f20 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -272,6 +272,21 @@ static inline pgprot_t mk_sect_prot(pgprot_t prot)
 	return __pgprot(pgprot_val(prot) & ~PTE_TABLE_BIT);
 }
 
+#ifdef CONFIG_NUMA_BALANCING
+/*
+ * See the comment in include/asm-generic/pgtable.h
+ */
+static inline int pte_protnone(pte_t pte)
+{
+	return (pte_val(pte) & (PTE_VALID | PTE_PROT_NONE)) == PTE_PROT_NONE;
+}
+
+static inline int pmd_protnone(pmd_t pmd)
+{
+	return pte_protnone(pmd_pte(pmd));
+}
+#endif
+
 /*
  * THP definitions.
  */