From patchwork Wed May 8 09:52:40 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 2538191
From: Steve Capper
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
Cc: Steve Capper, patches@linaro.org, Catalin Marinas, Will Deacon,
 Michal Hocko, Ken Chen, Mel Gorman
Subject: [RFC PATCH v2 08/11] ARM64: mm: Swap PTE_FILE and PTE_PROT_NONE bits.
Date: Wed, 8 May 2013 10:52:40 +0100
Message-Id: <1368006763-30774-9-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1368006763-30774-1-git-send-email-steve.capper@linaro.org>
References: <1368006763-30774-1-git-send-email-steve.capper@linaro.org>
X-Mailer: git-send-email 1.7.2.5
List-Id: linux-arm-kernel@lists.infradead.org

Under ARM64, PTEs can be broadly categorised as follows:
   - Present and valid: Bit #0 is set. The PTE is valid and memory
     access to the region may fault.

   - Present and invalid: Bit #0 is clear and bit #1 is set.
     Represents present memory with PROT_NONE protection.
     The PTE is an invalid entry, and the user fault handler will
     raise a SIGSEGV.

   - Not present (file): Bits #0 and #1 are clear, bit #2 is set.
     The memory represented has been paged out. The PTE is an invalid
     entry, and the fault handler will try to re-populate the memory
     where necessary.

Huge PTEs are block descriptors that have bit #1 clear. If we wish to
represent PROT_NONE huge PTEs we then run into a problem, as there is
no way to distinguish between regular and huge PTEs if we set bit #1.

As huge PTEs are always present, the meaning of bits #1 and #2 can be
swapped for invalid PTEs.

This patch swaps the PTE_FILE and PTE_PROT_NONE constants, allowing us
to represent PROT_NONE huge PTEs.

Signed-off-by: Steve Capper
Acked-by: Catalin Marinas
---
 arch/arm64/include/asm/pgtable.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b1a1b59..e245260 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -25,8 +25,8 @@
  * Software defined PTE bits definition.
  */
 #define PTE_VALID		(_AT(pteval_t, 1) << 0)
-#define PTE_PROT_NONE		(_AT(pteval_t, 1) << 1) /* only when !PTE_VALID */
-#define PTE_FILE		(_AT(pteval_t, 1) << 2) /* only when !pte_present() */
+#define PTE_FILE		(_AT(pteval_t, 1) << 1) /* only when !pte_present() */
+#define PTE_PROT_NONE		(_AT(pteval_t, 1) << 2) /* only when !PTE_VALID */
 #define PTE_DIRTY		(_AT(pteval_t, 1) << 55)
 #define PTE_SPECIAL		(_AT(pteval_t, 1) << 56)
@@ -306,8 +306,8 @@ extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
 /*
  * Encode and decode a file entry:
- *	bits 0-1:	present (must be zero)
- *	bit 2:		PTE_FILE
+ *	bits 0 & 2:	present (must be zero)
+ *	bit 1:		PTE_FILE
  *	bits 3-63:	file offset / PAGE_SIZE
  */
 #define pte_file(pte)		(pte_val(pte) & PTE_FILE)