From patchwork Tue Mar  8 23:59:47 2016
X-Patchwork-Submitter: David Daney
X-Patchwork-Id: 8538521
From: David Daney
To: Will Deacon, linux-arm-kernel@lists.infradead.org, Rob Herring,
	Frank Rowand, Grant Likely, Pawel Moll, Ian Campbell, Kumar Gala,
	Ganapatrao Kulkarni, Robert Richter, Ard Biesheuvel, Matt Fleming,
	Mark Rutland, Catalin Marinas
Subject: [PATCH v15 6/6] arm64, mm, numa: Add NUMA balancing support for arm64.
Date: Tue, 8 Mar 2016 15:59:47 -0800
Message-Id: <1457481587-8976-7-git-send-email-ddaney.cavm@gmail.com>
X-Mailer: git-send-email 1.7.11.7
In-Reply-To: <1457481587-8976-1-git-send-email-ddaney.cavm@gmail.com>
References: <1457481587-8976-1-git-send-email-ddaney.cavm@gmail.com>
Cc: devicetree@vger.kernel.org, linux-efi@vger.kernel.org,
	linux-kernel@vger.kernel.org, David Daney

From: Ganapatrao Kulkarni

Enable NUMA balancing for arm64 platforms.

Add pte, pmd protnone helpers for use by automatic NUMA balancing.

Reviewed-by: Robert Richter
Signed-off-by: Ganapatrao Kulkarni
Signed-off-by: David Daney
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/pgtable.h | 15 +++++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7013087..20f5192 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -11,6 +11,7 @@ config ARM64
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_SUPPORTS_ATOMIC_RMW
+	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_WANT_OPTIONAL_GPIOLIB
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
 	select ARCH_WANT_FRAME_POINTERS
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 819aff5..2150cb7 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -348,6 +348,21 @@ static inline pgprot_t mk_sect_prot(pgprot_t prot)
 	return __pgprot(pgprot_val(prot) & ~PTE_TABLE_BIT);
 }
 
+#ifdef CONFIG_NUMA_BALANCING
+/*
+ * See the comment in include/asm-generic/pgtable.h
+ */
+static inline int pte_protnone(pte_t pte)
+{
+	return (pte_val(pte) & (PTE_VALID | PTE_PROT_NONE)) == PTE_PROT_NONE;
+}
+
+static inline int pmd_protnone(pmd_t pmd)
+{
+	return pte_protnone(pmd_pte(pmd));
+}
+#endif
+
 /*
  * THP definitions.
  */
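
Review context (not part of the patch): the helper works because PTE_PROT_NONE is a
software bit that is only set while PTE_VALID is clear, so testing both bits together
distinguishes a NUMA hinting entry from an ordinary present mapping and from a
swap/none entry. Below is a minimal, self-contained userspace sketch of that encoding;
the bit positions (PTE_VALID as bit 0, PTE_PROT_NONE as software bit 58) are assumptions
taken from the arm64 pgtable headers of this era, and the program is an illustration,
not kernel code.

/*
 * Sketch only: mirrors the bit test added by this patch using assumed
 * arm64 bit positions (PTE_VALID = bit 0, PTE_PROT_NONE = bit 58).
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pteval_t;

#define PTE_VALID	((pteval_t)1 << 0)	/* hardware valid bit */
#define PTE_PROT_NONE	((pteval_t)1 << 58)	/* sw bit, only used when !PTE_VALID */

/* Same test as the pte_protnone() helper introduced by the patch. */
static int pte_protnone(pteval_t pte)
{
	return (pte & (PTE_VALID | PTE_PROT_NONE)) == PTE_PROT_NONE;
}

int main(void)
{
	pteval_t present  = PTE_VALID | 0x200000;	/* normally mapped page */
	pteval_t protnone = PTE_PROT_NONE | 0x200000;	/* NUMA hinting entry */
	pteval_t none     = 0x200000;			/* !VALID and !PROT_NONE */

	/* Only the NUMA hinting entry is reported as protnone. */
	printf("present:  %d\n", pte_protnone(present));	/* prints 0 */
	printf("protnone: %d\n", pte_protnone(protnone));	/* prints 1 */
	printf("none:     %d\n", pte_protnone(none));		/* prints 0 */
	return 0;
}

Requiring PTE_VALID to be clear is what keeps the test from misreading a present page,
and pmd_protnone() can simply reuse the pte helper via pmd_pte() because THP pmds use
the same encoding.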