From patchwork Wed Jan 25 06:37:00 2017
X-Patchwork-Submitter: AKASHI Takahiro
X-Patchwork-Id: 9536465
Date: Wed, 25 Jan 2017 15:37:00 +0900
From: AKASHI Takahiro
To: Pratyush Anand
Subject: Re: [PATCH v30 04/11] arm64: mm: allow for unmapping memory region from kernel mapping
Message-ID: <20170125063659.GD23406@linaro.org>
References: <20170124084638.3770-1-takahiro.akashi@linaro.org>
 <20170124085004.3892-3-takahiro.akashi@linaro.org>
Cc: mark.rutland@arm.com, geoff@infradead.org, catalin.marinas@arm.com,
 will.deacon@arm.com, james.morse@arm.com, bauerman@linux.vnet.ibm.com,
 dyoung@redhat.com, kexec@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org

On Tue, Jan 24, 2017 at 05:02:20PM +0530, Pratyush Anand wrote:
> 
> 
> On Tuesday 24 January 2017 02:19 PM, AKASHI Takahiro wrote:
> >The current implementation of create_mapping_late() is only allowed
> >to modify permission attributes (read-only or read-write) of
> >the existing kernel mapping.
> >
> >In this patch, a PAGE_KERNEL_INVALID protection attribute is introduced.
> >We will now be able to invalidate (or unmap) part of the existing
> >kernel mapping by passing PAGE_KERNEL_INVALID to create_mapping_late().
> >
> >This feature will be used in a succeeding kdump patch to protect
> >the memory reserved for the crash dump kernel once it has been loaded.
> >
> >Signed-off-by: AKASHI Takahiro
> >---
> > arch/arm64/include/asm/mmu.h           |  2 ++
> > arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
> > arch/arm64/include/asm/pgtable-prot.h  |  1 +
> > arch/arm64/include/asm/pgtable.h       |  4 ++++
> > arch/arm64/mm/mmu.c                    | 29 ++++++++++++++++++++---------
> > 5 files changed, 29 insertions(+), 9 deletions(-)
> >
> >diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> >index 47619411f0ff..a6c1367527bc 100644
> >--- a/arch/arm64/include/asm/mmu.h
> >+++ b/arch/arm64/include/asm/mmu.h
> >@@ -36,6 +36,8 @@ extern void init_mem_pgprot(void);
> > extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> >                               unsigned long virt, phys_addr_t size,
> >                               pgprot_t prot, bool page_mappings_only);
> >+extern void create_mapping_late(phys_addr_t phys, unsigned long virt,
> >+                               phys_addr_t size, pgprot_t prot);
> > extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
> >
> > #endif
> >diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> >index eb0c2bd90de9..e66efec31ca9 100644
> >--- a/arch/arm64/include/asm/pgtable-hwdef.h
> >+++ b/arch/arm64/include/asm/pgtable-hwdef.h
> >@@ -119,6 +119,7 @@
> > #define PUD_TABLE_BIT          (_AT(pgdval_t, 1) << 1)
> > #define PUD_TYPE_MASK          (_AT(pgdval_t, 3) << 0)
> > #define PUD_TYPE_SECT          (_AT(pgdval_t, 1) << 0)
> >+#define PUD_VALID              PUD_TYPE_SECT
> >
> > /*
> >  * Level 2 descriptor (PMD).
> >@@ -128,6 +129,7 @@
> > #define PMD_TYPE_TABLE         (_AT(pmdval_t, 3) << 0)
> > #define PMD_TYPE_SECT          (_AT(pmdval_t, 1) << 0)
> > #define PMD_TABLE_BIT          (_AT(pmdval_t, 1) << 1)
> >+#define PMD_VALID              PMD_TYPE_SECT
> >
> > /*
> >  * Section
> >diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> >index 2142c7726e76..945d84cd5df7 100644
> >--- a/arch/arm64/include/asm/pgtable-prot.h
> >+++ b/arch/arm64/include/asm/pgtable-prot.h
> >@@ -54,6 +54,7 @@
> > #define PAGE_KERNEL_ROX        __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
> > #define PAGE_KERNEL_EXEC       __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
> > #define PAGE_KERNEL_EXEC_CONT  __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
> >+#define PAGE_KERNEL_INVALID    __pgprot(0)
> >
> > #define PAGE_HYP               __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
> > #define PAGE_HYP_EXEC          __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
> >diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> >index ffbb9a520563..1904a7c07018 100644
> >--- a/arch/arm64/include/asm/pgtable.h
> >+++ b/arch/arm64/include/asm/pgtable.h
> >@@ -364,6 +364,8 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
> >
> > #define pmd_bad(pmd)           (!(pmd_val(pmd) & PMD_TABLE_BIT))
> >
> >+#define pmd_valid(pmd)         (!!(pmd_val(pmd) & PMD_VALID))
> >+
> > #define pmd_table(pmd)         ((pmd_val(pmd) & PMD_TYPE_MASK) == \
> >                                 PMD_TYPE_TABLE)
> > #define pmd_sect(pmd)          ((pmd_val(pmd) & PMD_TYPE_MASK) == \
> >@@ -428,6 +430,7 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
> >
> > #define pud_none(pud)          (!pud_val(pud))
> > #define pud_bad(pud)           (!(pud_val(pud) & PUD_TABLE_BIT))
> >+#define pud_valid(pud)         (!!(pud_val(pud) & PUD_VALID))
>
> This will break compilation for CONFIG_PGTABLE_LEVELS <= 2

Ah, yes.
A quick fix is as follows:
===>8===
Now I've confirmed that it compiles under the configurations with
  * 4KB pages, 39- and 48-bit address spaces
  * 64KB pages, 42- and 48-bit address spaces
and I have also verified a crash dump image for the 64KB, 42/48-bit cases.

> > #define pud_present(pud)       (pud_val(pud))
> >
> > static inline void set_pud(pud_t *pudp, pud_t pud)
> >@@ -481,6 +484,7 @@ static inline phys_addr_t pud_page_paddr(pud_t pud)
> >
> > #define pgd_none(pgd)          (!pgd_val(pgd))
> > #define pgd_bad(pgd)           (!(pgd_val(pgd) & 2))
> >+#define pgd_valid(pgd)         (!!(pgd_val(pgd) & 1))
>
> This has not been used anywhere.

Well, this patch actually also breaks ptdump (debugfs/kernel_page_tables),
as a descriptor can be non-zero yet invalid after applying this patch.
Once it is accepted, I will post another patch which will fix the issue.
pgd_valid() is used in that patch.

Thanks,
-Takahiro AKASHI

> > #define pgd_present(pgd)       (pgd_val(pgd))
> >
> > static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
> >diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> >index 17243e43184e..9c7adcce8e4e 100644
> >--- a/arch/arm64/mm/mmu.c
> >+++ b/arch/arm64/mm/mmu.c
> >@@ -133,7 +133,8 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
> > 	 * Set the contiguous bit for the subsequent group of PTEs if
> > 	 * its size and alignment are appropriate.
> > 	 */
> >-	if (((addr | PFN_PHYS(pfn)) & ~CONT_PTE_MASK) == 0) {
> >+	if ((pgprot_val(prot) & PTE_VALID) &&
> >+	    (((addr | PFN_PHYS(pfn)) & ~CONT_PTE_MASK) == 0)) {
> > 		if (end - addr >= CONT_PTE_SIZE && !page_mappings_only)
> > 			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> > 		else
> >@@ -147,7 +148,8 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
> > 	 * After the PTE entry has been populated once, we
> > 	 * only allow updates to the permission attributes.
> > 	 */
> >-	BUG_ON(!pgattr_change_is_safe(pte_val(old_pte), pte_val(*pte)));
> >+	BUG_ON(pte_valid(old_pte) && pte_valid(*pte) &&
> >+	       !pgattr_change_is_safe(pte_val(old_pte), pte_val(*pte)));
> >
> > } while (pte++, addr += PAGE_SIZE, addr != end);
> >
> >@@ -190,7 +192,8 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
> > 		 * Set the contiguous bit for the subsequent group of
> > 		 * PMDs if its size and alignment are appropriate.
> > 		 */
> >-		if (((addr | phys) & ~CONT_PMD_MASK) == 0) {
> >+		if ((pgprot_val(prot) | PMD_VALID) &&
> >+		    ((addr | phys) & ~CONT_PMD_MASK) == 0) {
> > 			if (end - addr >= CONT_PMD_SIZE)
> > 				__prot = __pgprot(pgprot_val(prot) |
> > 						  PTE_CONT);
> >@@ -203,7 +206,8 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
> > 		 * After the PMD entry has been populated once, we
> > 		 * only allow updates to the permission attributes.
> > 		 */
> >-		BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
> >+		BUG_ON(pmd_valid(old_pmd) && pmd_valid(*pmd) &&
> >+		       !pgattr_change_is_safe(pmd_val(old_pmd),
> > 					      pmd_val(*pmd)));
> > 	} else {
> > 		alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
> >@@ -263,7 +267,8 @@ static void alloc_init_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
> > 		 * After the PUD entry has been populated once, we
> > 		 * only allow updates to the permission attributes.
> > 	 */
> >-		BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
> >+		BUG_ON(pud_valid(old_pud) && pud_valid(*pud) &&
> >+		       !pgattr_change_is_safe(pud_val(old_pud),
> > 					      pud_val(*pud)));
> > 	} else {
> > 		alloc_init_pmd(pud, addr, next, phys, prot,
> >@@ -344,8 +349,8 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> > 			     pgd_pgtable_alloc, page_mappings_only);
> > }
> >
> >-static void create_mapping_late(phys_addr_t phys, unsigned long virt,
> >-				phys_addr_t size, pgprot_t prot)
> >+void create_mapping_late(phys_addr_t phys, unsigned long virt,
> >+			 phys_addr_t size, pgprot_t prot)
> > {
> > 	if (virt < VMALLOC_START) {
> > 		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
> >@@ -791,14 +796,20 @@ int __init arch_ioremap_pmd_supported(void)
> > int pud_set_huge(pud_t *pud, phys_addr_t phys, pgprot_t prot)
> > {
> > 	BUG_ON(phys & ~PUD_MASK);
> >-	set_pud(pud, __pud(phys | PUD_TYPE_SECT | pgprot_val(mk_sect_prot(prot))));
> >+	set_pud(pud, __pud(phys |
> >+			   ((pgprot_val(prot) & PUD_VALID) ?
> >+			    PUD_TYPE_SECT : 0) |
> >+			   pgprot_val(mk_sect_prot(prot))));
> > 	return 1;
> > }
> >
> > int pmd_set_huge(pmd_t *pmd, phys_addr_t phys, pgprot_t prot)
> > {
> > 	BUG_ON(phys & ~PMD_MASK);
> >-	set_pmd(pmd, __pmd(phys | PMD_TYPE_SECT | pgprot_val(mk_sect_prot(prot))));
> >+	set_pmd(pmd, __pmd(phys |
> >+			   ((pgprot_val(prot) & PMD_VALID) ?
> >+			    PMD_TYPE_SECT : 0) |
> >+			   pgprot_val(mk_sect_prot(prot))));
> > 	return 1;
> > }
>
> ~Pratyush

===8<===
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 1904a7c07018..dc11d4bf332c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -467,6 +467,8 @@ static inline phys_addr_t pud_page_paddr(pud_t pud)

 #else

+#define pud_valid(pud)		(1)
+
 #define pud_page_paddr(pud)	({ BUILD_BUG(); 0; })

 /* Match pmd_offset folding in */
@@ -520,6 +522,8 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)

 #else

+#define pgd_valid(pgd)		(1)
+
 #define pgd_page_paddr(pgd)	({ BUILD_BUG(); 0; })

 /* Match pud_offset folding in */