From patchwork Tue Apr 30 16:30:48 2013
X-Patchwork-Submitter: Steve Capper
X-Patchwork-Id: 2505811
From: Steve Capper <steve.capper@linaro.org>
To: linux-mm@kvack.org, x86@kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 9/9] ARM64: mm: THP support.
Date: Tue, 30 Apr 2013 17:30:48 +0100
Message-Id: <1367339448-21727-10-git-send-email-steve.capper@linaro.org>
In-Reply-To: <1367339448-21727-1-git-send-email-steve.capper@linaro.org>
References: <1367339448-21727-1-git-send-email-steve.capper@linaro.org>
Cc: Steve Capper, Catalin Marinas, Will Deacon, Michal Hocko, Ken Chen,
	Mel Gorman

Bring Transparent HugePage support to ARM64. The size of a transparent
huge page depends on the normal page size. A transparent huge page is
always represented as a pmd.

If PAGE_SIZE is 4K, THPs are 2MB.
If PAGE_SIZE is 64K, THPs are 512MB.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
---
 arch/arm64/Kconfig                     |  3 +++
 arch/arm64/include/asm/pgtable-hwdef.h |  1 +
 arch/arm64/include/asm/pgtable.h       | 47 ++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/tlb.h           |  6 +++++
 arch/arm64/include/asm/tlbflush.h      |  2 ++
 5 files changed, 59 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 908fd95..9356802 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -194,6 +194,9 @@ config ARCH_WANT_GENERAL_HUGETLB
 config ARCH_WANT_HUGE_PMD_SHARE
 	def_bool y if !ARM64_64K_PAGES
 
+config HAVE_ARCH_TRANSPARENT_HUGEPAGE
+	def_bool y if MMU
+
 source "mm/Kconfig"
 
 config FORCE_MAX_ZONEORDER
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index c3cac68..862acf7 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -35,6 +35,7 @@
 /*
  * Section
  */
+#define PMD_SECT_RDONLY		(_AT(pmdval_t, 1) << 7)		/* AP[2] */
 #define PMD_SECT_S		(_AT(pmdval_t, 3) << 8)
 #define PMD_SECT_AF		(_AT(pmdval_t, 1) << 10)
 #define PMD_SECT_NG		(_AT(pmdval_t, 1) << 11)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 4b7a058..06bfbd6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -188,6 +188,53 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 #define __HAVE_ARCH_PTE_SPECIAL
 
 /*
+ * Software PMD bits for THP
+ */
+
+#define PMD_SECT_DIRTY		(_AT(pmdval_t, 1) << 55)
+#define PMD_SECT_SPLITTING	(_AT(pmdval_t, 1) << 57)
+
+/*
+ * THP definitions.
+ */
+#define pmd_young(pmd)		(pmd_val(pmd) & PMD_SECT_AF)
+
+#define __HAVE_ARCH_PMD_WRITE
+#define pmd_write(pmd)		(!(pmd_val(pmd) & PMD_SECT_RDONLY))
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define pmd_trans_huge(pmd)	((pmd_val(pmd) & PMD_TYPE_MASK) == PMD_TYPE_SECT)
+#define pmd_trans_splitting(pmd) (pmd_val(pmd) & PMD_SECT_SPLITTING)
+#endif
+
+#define PMD_BIT_FUNC(fn,op) \
+static inline pmd_t pmd_##fn(pmd_t pmd) { pmd_val(pmd) op; return pmd; }
+
+PMD_BIT_FUNC(wrprotect,	|= PMD_SECT_RDONLY);
+PMD_BIT_FUNC(mkold,	&= ~PMD_SECT_AF);
+PMD_BIT_FUNC(mksplitting, |= PMD_SECT_SPLITTING);
+PMD_BIT_FUNC(mkwrite,	&= ~PMD_SECT_RDONLY);
+PMD_BIT_FUNC(mkdirty,	|= PMD_SECT_DIRTY);
+PMD_BIT_FUNC(mkyoung,	|= PMD_SECT_AF);
+PMD_BIT_FUNC(mknotpresent, &= ~PMD_TYPE_MASK);
+
+#define pmd_mkhuge(pmd)		(__pmd((pmd_val(pmd) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT))
+
+#define pmd_pfn(pmd)		(((pmd_val(pmd) & PMD_MASK) & PHYS_MASK) >> PAGE_SHIFT)
+#define pfn_pmd(pfn,prot)	(__pmd(((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)))
+#define mk_pmd(page,prot)	pfn_pmd(page_to_pfn(page),prot)
+
+#define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
+
+#define pmd_modify(pmd,newprot)	(__pmd(pmd_val(pmd) | pgprot_val(newprot)))
+#define set_pmd_at(mm, addr, pmdp, pmd)	set_pmd(pmdp, pmd)
+
+static inline int has_transparent_hugepage(void)
+{
+	return 1;
+}
+
+/*
  * Mark the prot value as uncacheable and unbufferable.
  */
 #define pgprot_noncached(prot) \
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 654f096..46b3beb 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -187,4 +187,10 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
 
 #define tlb_migrate_finish(mm)	do { } while (0)
 
+static inline void
+tlb_remove_pmd_tlb_entry(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
+{
+	tlb_add_flush(tlb, addr);
+}
+
 #endif
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 122d632..8b48203 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -117,6 +117,8 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 	dsb();
 }
 
+#define update_mmu_cache_pmd(vma, address, pmd)	do { } while (0)
+
 #endif
 #endif
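
For readers wondering where the sizes in the commit message come from:
each translation table level on arm64 resolves PAGE_SHIFT - 3 bits of
the address, since a table holds PAGE_SIZE / 8 descriptors, so a
pmd-level section covers 1 << (2 * PAGE_SHIFT - 3) bytes. The
standalone userspace sketch below is not part of the patch; it only
reproduces that arithmetic for the two granule choices.

/*
 * Illustrative only -- not part of the patch. Prints the THP
 * (pmd section) size implied by each arm64 page-size choice.
 */
#include <stdio.h>

int main(void)
{
	const int page_shifts[] = { 12, 16 };	/* 4K and 64K granules */

	for (int i = 0; i < 2; i++) {
		/* Each table level resolves PAGE_SHIFT - 3 bits, so the
		 * pmd-level section size is 1 << (2 * PAGE_SHIFT - 3). */
		int pmd_shift = 2 * page_shifts[i] - 3;

		printf("PAGE_SIZE %2luK -> THP size %3lluMB\n",
		       (1UL << page_shifts[i]) >> 10,
		       (1ULL << pmd_shift) >> 20);
	}
	return 0;
}

Compiled with any C compiler, this prints 2MB for 4K pages and 512MB
for 64K pages, matching the figures in the commit message.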