
[PATCHv2] arm64:mm: free the useless initial page table

Message ID: 1418110007-13270-1-git-send-email-zhichang.yuan@linaro.org
State: New, archived

Commit Message

zhichang.yuan@linaro.org Dec. 9, 2014, 7:26 a.m. UTC
From: "zhichang.yuan" <zhichang.yuan@linaro.org>

For a 64K page system, after a PMD section is mapped, the corresponding initial
page table is no longer needed. That page can be freed.

Changes since v1:

* make the code consistent between alloc_init_pmd and alloc_init_pud;
* flush the TLB before the unused page table is freed;

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
---
 arch/arm64/include/asm/pgtable.h |    3 +++
 arch/arm64/mm/mmu.c              |   15 ++++++++++++---
 2 files changed, 15 insertions(+), 3 deletions(-)
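
For context, the new pud_table() macro (and the existing pmd_table() used in
mmu.c) keys off bits [1:0] of the descriptor, which distinguish a block
(section) mapping from a pointer to a next-level table. Below is a minimal
standalone sketch of that check; the constants mirror
arch/arm64/include/asm/pgtable-hwdef.h, and the helper name is illustrative,
not part of the patch. With 64K pages the pud level cannot hold a section
mapping, which is why the patch defines pud_sect() as (0) and pud_table() as
(1) unconditionally there.

#include <stdbool.h>
#include <stdint.h>

/* ARMv8 upper-level descriptor type field, bits [1:0]. */
#define DESC_TYPE_MASK		0x3UL
#define DESC_TYPE_SECT		0x1UL	/* 0b01: block (section) mapping  */
#define DESC_TYPE_TABLE		0x3UL	/* 0b11: next-level table pointer */

/*
 * Illustrative stand-in for pmd_table()/pud_table(): true when the old
 * entry pointed to a lower-level page table that is now superseded by a
 * section mapping and can therefore be freed.
 */
static bool desc_is_table(uint64_t desc)
{
	return (desc & DESC_TYPE_MASK) == DESC_TYPE_TABLE;
}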

Comments

Catalin Marinas Jan. 23, 2015, 4:21 p.m. UTC | #1
On Tue, Dec 09, 2014 at 07:26:47AM +0000, zhichang.yuan@linaro.org wrote:
> From: "zhichang.yuan" <zhichang.yuan@linaro.org>
> 
> For a 64K page system, after a PMD section is mapped, the corresponding initial
> page table is no longer needed. That page can be freed.
> 
> Changes since v1:
> 
> * make the code consistent between alloc_init_pmd and alloc_init_pud;
> * flush the TLB before the unused page table is freed;
> 
> Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>

I thought about queuing this patch but I realised that
alloc_init_pmd/pud may be called in a late context where memblock_free()
would no longer make sense. Cc'ing Laura and Ard for any ideas here but
I think we may just end up with a few old page table pages sitting
around (not many though). In general we don't go from smaller to larger
mappings (that's what this patch targets) but given the API, I'm not
sure we have any guarantee.

One solution would be to check for alloc == early_alloc or something
similar. Cc'ing Laura and Ard as they added the create_mapping_late()
code.
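
A rough sketch of that guard in the pmd path, assuming alloc_init_pmd() is
passed the allocator function as a parameter (as with the create_mapping_late()
series); the parameter name and placement are illustrative, not taken from the
patch:

	if (!pmd_none(old_pmd)) {
		flush_tlb_all();
		if (pmd_table(old_pmd)) {
			phys_addr_t table = __pa(pte_offset_map(&old_pmd, 0));

			/*
			 * Returning the old table to memblock is only valid
			 * for early, memblock-backed allocations; late callers
			 * are not expected to replace table entries with
			 * section mappings.
			 */
			if (alloc == early_alloc)
				memblock_free(table, PAGE_SIZE);
		}
	}

Ard's reply below suggests a BUG_ON(alloc != early_alloc) instead, so an
unexpected late-time remapping would be caught loudly rather than just leaving
the old table page allocated.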

> ---
>  arch/arm64/include/asm/pgtable.h |    3 +++
>  arch/arm64/mm/mmu.c              |   15 ++++++++++++---
>  2 files changed, 15 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 41a43bf..8a135b6 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -337,9 +337,12 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
>  
>  #ifdef CONFIG_ARM64_64K_PAGES
>  #define pud_sect(pud)		(0)
> +#define pud_table(pud)		(1)
>  #else
>  #define pud_sect(pud)		((pud_val(pud) & PUD_TYPE_MASK) == \
>  				 PUD_TYPE_SECT)
> +#define pud_table(pud)		((pud_val(pud) & PUD_TYPE_MASK) == \
> +					PUD_TYPE_TABLE)
>  #endif
>  
>  static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index f4f8b50..515f75b 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -191,8 +191,14 @@ static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
>  			 * Check for previous table entries created during
>  			 * boot (__create_page_tables) and flush them.
>  			 */
> -			if (!pmd_none(old_pmd))
> +			if (!pmd_none(old_pmd)) {
>  				flush_tlb_all();
> +				if (pmd_table(old_pmd)) {
> +					phys_addr_t table = __pa(pte_offset_map(&old_pmd, 0));
> +
> +					memblock_free(table, PAGE_SIZE);
> +				}
> +			}
>  		} else {
>  			alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
>  				       prot_pte);
> @@ -234,9 +240,12 @@ static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
>  			 * Look up the old pmd table and free it.
>  			 */
>  			if (!pud_none(old_pud)) {
> -				phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
> -				memblock_free(table, PAGE_SIZE);
>  				flush_tlb_all();
> +				if (pud_table(old_pud)) {
> +					phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
> +
> +					memblock_free(table, PAGE_SIZE);
> +				}
>  			}
>  		} else {
>  			alloc_init_pmd(pud, addr, next, phys, map_io);
> -- 
> 1.7.9.5

Ard Biesheuvel Jan. 23, 2015, 5:40 p.m. UTC | #2
On 23 January 2015 at 16:21, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Tue, Dec 09, 2014 at 07:26:47AM +0000, zhichang.yuan@linaro.org wrote:
>> From: "zhichang.yuan" <zhichang.yuan@linaro.org>
>>
>> For a 64K page system, after a PMD section is mapped, the corresponding initial
>> page table is no longer needed. That page can be freed.
>>
>> Changes since v1:
>>
>> * make the code consistent between alloc_init_pmd and alloc_init_pud;
>> * flush the TLB before the unused page table is freed;
>>
>> Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
>
> I thought about queuing this patch but I realised that
> alloc_init_pmd/pud may be called in a late context where memblock_free()
> would no longer make sense. Cc'ing Laura and Ard for any ideas here but
> I think we may just end up with a few old page table pages sitting
> around (not many though). In general we don't go from smaller to larger
> mappings (that's what this patch targets) but given the API, I'm not
> sure we have any guarantee.
>
> One solution would be to check for alloc == early_alloc or something
> similar. Cc'ing Laura and Ard as they added the create_mapping_late()
> code.
>

The UEFI page tables are only built up once, based on a series of
disjoint memory regions, so that will never hit either of the
memblock_free() branches.
And AFAICT, the DEBUG_RODATA code does the splitting early, which
causes the create_mapping_late() code to only change permissions, not
change the granularity of any regions.

Perhaps it's sufficient to add a comment and a BUG_ON(alloc !=
early_alloc) to the memblock_free() branches?

Catalin Marinas Jan. 28, 2015, 12:10 p.m. UTC | #3
On Fri, Jan 23, 2015 at 05:40:40PM +0000, Ard Biesheuvel wrote:
> On 23 January 2015 at 16:21, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Tue, Dec 09, 2014 at 07:26:47AM +0000, zhichang.yuan@linaro.org wrote:
> >> From: "zhichang.yuan" <zhichang.yuan@linaro.org>
> >>
> >> For a 64K page system, after a PMD section is mapped, the corresponding initial
> >> page table is no longer needed. That page can be freed.
> >>
> >> Changes since v1:
> >>
> >> * make the code consistent between alloc_init_pmd and alloc_init_pud;
> >> * flush the TLB before the unused page table is freed;
> >>
> >> Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
> >
> > I thought about queuing this patch but I realised that
> > alloc_init_pmd/pud may be called in a late context where memblock_free()
> > would no longer make sense. Cc'ing Laura and Ard for any ideas here but
> > I think we may just end up with a few old page table pages sitting
> > around (not many though). In general we don't go from smaller to larger
> > mappings (that's what this patch targets) but given the API, I'm not
> > sure we have any guarantee.
> >
> > One solution would be to check for alloc == early_alloc or something
> > similar. Cc'ing Laura and Ard as they added the create_mapping_late()
> > code.
> >
> 
> The UEFI page tables are only built up once, based on a series of
> disjoint memory regions, so that will never hit either of the
> memblock_free() branches.
> And AFAICT, the DEBUG_RODATA code does the splitting early, which
> causes the create_mapping_late() code to only change permissions, not
> change the granularity of any regions.
> 
> Perhaps it's sufficient to add a comment and a BUG_ON(alloc !=
> early_alloc) to the memblock_free() branches?

Thanks for confirming. I merged this patch together with a BUG_ON(), just
in case.

Patch

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 41a43bf..8a135b6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -337,9 +337,12 @@  extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 
 #ifdef CONFIG_ARM64_64K_PAGES
 #define pud_sect(pud)		(0)
+#define pud_table(pud)		(1)
 #else
 #define pud_sect(pud)		((pud_val(pud) & PUD_TYPE_MASK) == \
 				 PUD_TYPE_SECT)
+#define pud_table(pud)		((pud_val(pud) & PUD_TYPE_MASK) == \
+					PUD_TYPE_TABLE)
 #endif
 
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f4f8b50..515f75b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -191,8 +191,14 @@  static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
 			 * Check for previous table entries created during
 			 * boot (__create_page_tables) and flush them.
 			 */
-			if (!pmd_none(old_pmd))
+			if (!pmd_none(old_pmd)) {
 				flush_tlb_all();
+				if (pmd_table(old_pmd)) {
+					phys_addr_t table = __pa(pte_offset_map(&old_pmd, 0));
+
+					memblock_free(table, PAGE_SIZE);
+				}
+			}
 		} else {
 			alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
 				       prot_pte);
@@ -234,9 +240,12 @@  static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
 			 * Look up the old pmd table and free it.
 			 */
 			if (!pud_none(old_pud)) {
-				phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
-				memblock_free(table, PAGE_SIZE);
 				flush_tlb_all();
+				if (pud_table(old_pud)) {
+					phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
+
+					memblock_free(table, PAGE_SIZE);
+				}
 			}
 		} else {
 			alloc_init_pmd(pud, addr, next, phys, map_io);