From patchwork Mon Jun 16 14:32:38 2014
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 4359721
From: Steve Capper <steve.capper@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux@arm.linux.org.uk
Cc: lauraa@codeaurora.org, catalin.marinas@arm.com, will.deacon@arm.com,
	keescook@chromium.org, Steve Capper <steve.capper@linaro.org>
Subject: [PATCH V4 1/2] arm: mm: Introduce {pte, pmd}_isset and {pte, pmd}_isclear
Date: Mon, 16 Jun 2014 15:32:38 +0100
Message-Id: <1402929159-11028-2-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1402929159-11028-1-git-send-email-steve.capper@linaro.org>
References: <1402929159-11028-1-git-send-email-steve.capper@linaro.org>

Long descriptors on ARM are 64 bits, and some pte functions such as
pte_dirty return a bitwise AND of a flag with the pte value. If the
flag to be tested resides in the upper 32 bits of the pte, the result
can be silently dropped when it is downcast to a 32-bit type. For
example:

	gather_stats(page, md, pte_dirty(*pte), 1);

where pte_dirty(*pte) is downcast to an int.

This patch introduces a new macro, pte_isset, which performs the
bitwise AND, then applies a double logical invert (where needed) to
ensure predictable downcasting. Its logical inverse, pte_isclear, is
also introduced. Equivalent pmd functions for Transparent HugePages
have also been added.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm/include/asm/pgtable-3level.h | 10 +++++++---
 arch/arm/include/asm/pgtable.h        | 18 +++++++++++-------
 2 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index 85c60ad..bde49f9 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -207,16 +207,20 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
 #define pte_huge(pte)		(pte_val(pte) && !(pte_val(pte) & PTE_TABLE_BIT))
 #define pte_mkhuge(pte)		(__pte(pte_val(pte) & ~PTE_TABLE_BIT))
 
-#define pmd_young(pmd)		(pmd_val(pmd) & PMD_SECT_AF)
+#define pmd_isset(pmd, val)	((u32)(val) == (val) ? pmd_val(pmd) & (val) \
+						: !!(pmd_val(pmd) & (val)))
+#define pmd_isclear(pmd, val)	(!(pmd_val(pmd) & (val)))
+
+#define pmd_young(pmd)		(pmd_isset((pmd), PMD_SECT_AF))
 
 #define __HAVE_ARCH_PMD_WRITE
-#define pmd_write(pmd)		(!(pmd_val(pmd) & PMD_SECT_RDONLY))
+#define pmd_write(pmd)		(pmd_isclear((pmd), PMD_SECT_RDONLY))
 
 #define pmd_hugewillfault(pmd)	(!pmd_young(pmd) || !pmd_write(pmd))
 #define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define pmd_trans_huge(pmd)	(pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
+#define pmd_trans_huge(pmd)	(pmd_val(pmd) && pmd_isclear((pmd), PMD_TABLE_BIT))
 #define pmd_trans_splitting(pmd) (pmd_val(pmd) & PMD_SECT_SPLITTING)
 #endif
 
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 5478e5d..01baef0 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -214,18 +214,22 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 
 #define pte_clear(mm,addr,ptep)	set_pte_ext(ptep, __pte(0), 0)
 
+#define pte_isset(pte, val)	((u32)(val) == (val) ? pte_val(pte) & (val) \
+						: !!(pte_val(pte) & (val)))
+#define pte_isclear(pte, val)	(!(pte_val(pte) & (val)))
+
 #define pte_none(pte)		(!pte_val(pte))
-#define pte_present(pte)	(pte_val(pte) & L_PTE_PRESENT)
-#define pte_valid(pte)		(pte_val(pte) & L_PTE_VALID)
+#define pte_present(pte)	(pte_isset((pte), L_PTE_PRESENT))
+#define pte_valid(pte)		(pte_isset((pte), L_PTE_VALID))
 #define pte_accessible(mm, pte)	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))
-#define pte_write(pte)		(!(pte_val(pte) & L_PTE_RDONLY))
-#define pte_dirty(pte)		(pte_val(pte) & L_PTE_DIRTY)
-#define pte_young(pte)		(pte_val(pte) & L_PTE_YOUNG)
-#define pte_exec(pte)		(!(pte_val(pte) & L_PTE_XN))
+#define pte_write(pte)		(pte_isclear((pte), L_PTE_RDONLY))
+#define pte_dirty(pte)		(pte_isset((pte), L_PTE_DIRTY))
+#define pte_young(pte)		(pte_isset((pte), L_PTE_YOUNG))
+#define pte_exec(pte)		(pte_isclear((pte), L_PTE_XN))
 #define pte_special(pte)	(0)
 
 #define pte_valid_user(pte)	\
-	(pte_valid(pte) && (pte_val(pte) & L_PTE_USER) && pte_young(pte))
+	(pte_valid(pte) && pte_isset((pte), L_PTE_USER) && pte_young(pte))
 
 #if __LINUX_ARM_ARCH__ < 6
 static inline void __sync_icache_dcache(pte_t pteval)
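
For illustration, here is a minimal stand-alone user-space sketch of the
truncation hazard described in the commit message. All names in it
(FAKE_PTE_DIRTY, old_pte_dirty, new_pte_dirty, report) are hypothetical,
invented for this example; only the (u32)(val) == (val) ? ... : !!(...)
pattern mirrors the pte_isset()/pmd_isset() macros in the patch.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical descriptor flag above bit 31, standing in for a flag
 * such as L_PTE_DIRTY in the upper half of a 64-bit LPAE pte. */
#define FAKE_PTE_DIRTY	(1ULL << 55)

/* Old style: returns the raw masked 64-bit value. Bit 55 vanishes if
 * the result is downcast to a 32-bit int. */
#define old_pte_dirty(pteval)	((pteval) & FAKE_PTE_DIRTY)

/* pte_isset() style: if the flag does not fit in 32 bits, the double
 * logical invert folds the result to 0/1, so any downcast is safe. */
#define new_pte_dirty(pteval)					\
	((uint32_t)(FAKE_PTE_DIRTY) == (FAKE_PTE_DIRTY) ?	\
	 (pteval) & FAKE_PTE_DIRTY :				\
	 !!((pteval) & FAKE_PTE_DIRTY))

/* Stands in for a caller like gather_stats() that takes an int. */
static void report(const char *tag, int dirty)
{
	printf("%s: dirty = %d\n", tag, dirty);
}

int main(void)
{
	uint64_t pteval = FAKE_PTE_DIRTY;	/* dirty flag set */

	report("old", old_pte_dirty(pteval));	/* prints 0: bit 55 lost in the downcast */
	report("new", new_pte_dirty(pteval));	/* prints 1: normalised before the downcast */
	return 0;
}

On a typical target the first call prints 0 because bit 55 is discarded
when the 64-bit result is converted to int, while the second prints 1;
flags that fit in 32 bits keep the cheap plain-AND path, which is why
the macro tests (u32)(val) == (val) rather than using !! unconditionally.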