From patchwork Tue Dec 9 07:26:47 2014
X-Patchwork-Submitter: zhichang.yuan@linaro.org
X-Patchwork-Id: 5460681
From: zhichang.yuan@linaro.org
To: Catalin.Marinas@arm.com, will.deacon@arm.com
Cc: linaro-kernel@lists.linaro.org, liguozhu@huawei.com, linux-kernel@vger.kernel.org, "zhichang.yuan" , dsaxena@linaro.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCHv2] arm64:mm: free the useless initial page table
Date: Tue, 9 Dec 2014 15:26:47 +0800
Message-Id: <1418110007-13270-1-git-send-email-zhichang.yuan@linaro.org>

From: "zhichang.yuan"

On a 64K page system, once a PMD section mapping has been created, the initial page table that previously backed that entry is no longer needed, so the page holding it can be freed.
Changes since v1:
* make the code consistent between alloc_init_pmd and alloc_init_pud;
* flush the TLB before the unused page table is freed;

Signed-off-by: Zhichang Yuan
---
 arch/arm64/include/asm/pgtable.h |  3 +++
 arch/arm64/mm/mmu.c              | 15 ++++++++++++---
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 41a43bf..8a135b6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -337,9 +337,12 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,

 #ifdef CONFIG_ARM64_64K_PAGES
 #define pud_sect(pud)		(0)
+#define pud_table(pud)		(1)
 #else
 #define pud_sect(pud)		((pud_val(pud) & PUD_TYPE_MASK) == \
 				 PUD_TYPE_SECT)
+#define pud_table(pud)		((pud_val(pud) & PUD_TYPE_MASK) == \
+				 PUD_TYPE_TABLE)
 #endif

 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f4f8b50..515f75b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -191,8 +191,14 @@ static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
			 * Check for previous table entries created during
			 * boot (__create_page_tables) and flush them.
			 */
-			if (!pmd_none(old_pmd))
+			if (!pmd_none(old_pmd)) {
				flush_tlb_all();
+				if (pmd_table(old_pmd)) {
+					phys_addr_t table = __pa(pte_offset_map(&old_pmd, 0));
+
+					memblock_free(table, PAGE_SIZE);
+				}
+			}
		} else {
			alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
				       prot_pte);
@@ -234,9 +240,12 @@ static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
			 * Look up the old pmd table and free it.
			 */
			if (!pud_none(old_pud)) {
-				phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
-				memblock_free(table, PAGE_SIZE);
				flush_tlb_all();
+				if (pud_table(old_pud)) {
+					phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
+
+					memblock_free(table, PAGE_SIZE);
+				}
			}
		} else {
			alloc_init_pmd(pud, addr, next, phys, map_io);