From patchwork Mon Jan 30 08:42:33 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: AKASHI Takahiro
X-Patchwork-Id: 9544537
Date: Mon, 30 Jan 2017 17:42:33 +0900
From: AKASHI Takahiro
To: Mark Rutland
Subject: Re: [PATCH v30 05/11] arm64: kdump: protect crash dump kernel memory
Message-ID: <20170130084232.GI23406@linaro.org>
Mail-Followup-To: AKASHI Takahiro, Mark Rutland, James Morse,
 catalin.marinas@arm.com, will.deacon@arm.com, geoff@infradead.org,
 bauerman@linux.vnet.ibm.com, dyoung@redhat.com,
 kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org
References: <20170124084638.3770-1-takahiro.akashi@linaro.org>
 <20170124085004.3892-4-takahiro.akashi@linaro.org>
 <5888E262.4050208@arm.com>
 <20170126112811.GG23406@linaro.org>
 <588B2CC4.70904@arm.com>
 <20170127171513.GA3119@fireball>
 <20170127185612.GA31485@leverpostej>
Content-Disposition: inline
In-Reply-To: <20170127185612.GA31485@leverpostej>
User-Agent: Mutt/1.5.24 (2015-08-30)
Cc: geoff@infradead.org, catalin.marinas@arm.com, will.deacon@arm.com,
 James Morse, bauerman@linux.vnet.ibm.com, dyoung@redhat.com,
 kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org

Mark,

On Fri, Jan 27, 2017 at 06:56:13PM +0000, Mark Rutland wrote:
> On Sat, Jan 28, 2017 at 02:15:16AM +0900, AKASHI Takahiro wrote:
> > On Fri, Jan 27, 2017 at 11:19:32AM +0000, James Morse wrote:
> > > Hi Akashi,
> > >
> > > On 26/01/17 11:28, AKASHI Takahiro wrote:
> > > > On Wed, Jan 25, 2017 at 05:37:38PM +0000, James Morse wrote:
> > > >> On 24/01/17 08:49, AKASHI Takahiro wrote:
> > > >>> To protect the memory reserved for crash dump kernel once after loaded,
> > > >>> arch_kexec_protect_crashres/unprotect_crashres() are meant to deal with
> > > >>> permissions of the corresponding kernel mappings.
> > > >>>
> > > >>> We also have to
> > > >>> - put the region in an isolated mapping, and
> > > >>> - move copying kexec's control_code_page to machine_kexec_prepare()
> > > >>> so that the region will be completely read-only after loading.
> > > >>
> > > >>
> > > >>> Note that the region must reside in linear mapping and have corresponding
> > > >>> page structures in order to be potentially freed by shrinking it through
> > > >>> /sys/kernel/kexec_crash_size.
> > Ah; I did not realise that this was a possibility.
>
> > Now I understand why we should stick with page_mapping_only option.
>
> Likewise, I now agree.
>
> Apologies for guiding you down the wrong path here.

Your comments are always welcome.

Anyhow, I think we'd better have a dedicated unmapping function. Can you
please take a look at the hack below? (We need to use this function
carefully outside of the kdump case, since it does not check whether the
region being unmapped is still in use elsewhere.)

Thanks,
-Takahiro AKASHI

> Thanks,
> Mark.

===8<===
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 2142c7726e76..945d84cd5df7 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -54,6 +54,7 @@
 #define PAGE_KERNEL_ROX		__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
 #define PAGE_KERNEL_EXEC	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
 #define PAGE_KERNEL_EXEC_CONT	__pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
+#define PAGE_KERNEL_INVALID	__pgprot(0)
 
 #define PAGE_HYP		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
 #define PAGE_HYP_EXEC		__pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 17243e43184e..81173b594195 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -307,6 +307,101 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
 	} while (pgd++, addr = next, addr != end);
 }
 
+static void free_pte(pmd_t *pmd, unsigned long addr, unsigned long end,
+		     bool dealloc_table)
+{
+	pte_t *pte;
+	bool do_free = (dealloc_table && ((end - addr) == PMD_SIZE));
+
+	BUG_ON(pmd_none(*pmd) || pmd_bad(*pmd));
+
+	pte = pte_set_fixmap_offset(pmd, addr);
+	do {
+		pte_clear(NULL, NULL, pte);
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+
+	pte_clear_fixmap();
+
+	if (do_free) {
+		__free_page(pmd_page(*pmd));
+		pmd_clear(pmd);
+	}
+}
+
+static void free_pmd(pud_t *pud, unsigned long addr, unsigned long end,
+		     bool dealloc_table)
+{
+	pmd_t *pmd;
+	unsigned long next;
+	bool do_free = (dealloc_table && ((end - addr) == PUD_SIZE));
+
+	BUG_ON(pud_none(*pud) || pud_bad(*pud));
+
+	pmd = pmd_set_fixmap_offset(pud, addr);
+
+	do {
+		next = pmd_addr_end(addr, end);
+
+		if (pmd_table(*pmd)) {
+			free_pte(pmd, addr, next, dealloc_table);
+		} else {
+			pmd_clear(pmd);
+		}
+	} while (pmd++, addr = next, addr != end);
+
+	pmd_clear_fixmap();
+
+	if (do_free) {
+		__free_page(pud_page(*pud));
+		pud_clear(pud);
+	}
+}
+
+static void free_pud(pgd_t *pgd, unsigned long addr, unsigned long end,
+		     bool dealloc_table)
+{
+	pud_t *pud;
+	unsigned long next;
+	bool do_free = (dealloc_table && ((end - addr) == PGDIR_SIZE));
+
+	BUG_ON(pgd_none(*pgd) || pgd_bad(*pgd));
+
+	pud = pud_set_fixmap_offset(pgd, addr);
+
+	do {
+		next = pud_addr_end(addr, end);
+
+		if (pud_table(*pud)) {
+			free_pmd(pud, addr, next, dealloc_table);
+		} else {
+			pud_clear(pud);
+		}
+	} while (pud++, addr = next, addr != end);
+
+	pud_clear_fixmap();
+
+	if (do_free) {
+		__free_page(pgd_page(*pgd));
+		pgd_clear(pgd);
+	}
+}
+
+static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long virt,
+				 phys_addr_t size, bool dealloc_table)
+{
+	unsigned long addr, length, end, next;
+	pgd_t *pgd = pgd_offset_raw(pgdir, virt);
+
+	addr = virt & PAGE_MASK;
+	length = PAGE_ALIGN(size + (virt & ~PAGE_MASK));
+
+	end = addr + length;
+	do {
+		next = pgd_addr_end(addr, end);
+		free_pud(pgd, addr, next, dealloc_table);
+	} while (pgd++, addr = next, addr != end);
+}
+
 static phys_addr_t pgd_pgtable_alloc(void)
 {
 	void *ptr = (void *)__get_free_page(PGALLOC_GFP);
@@ -334,14 +429,15 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL, false);
 }
 
-void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
-			       unsigned long virt, phys_addr_t size,
-			       pgprot_t prot, bool page_mappings_only)
+void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
+			unsigned long virt, phys_addr_t size,
+			pgprot_t prot, bool page_mappings_only)
 {
-	BUG_ON(mm == &init_mm);
-
-	__create_pgd_mapping(mm->pgd, phys, virt, size, prot,
-			     pgd_pgtable_alloc, page_mappings_only);
+	if (pgprot_val(prot))
+		__create_pgd_mapping(mm->pgd, phys, virt, size, prot,
+				     pgd_pgtable_alloc, page_mappings_only);
+	else
+		__remove_pgd_mapping(mm->pgd, virt, size, true);
 }
 
 static void create_mapping_late(phys_addr_t phys, unsigned long virt,
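
[Editor's note: for context only, not part of the patch above. The expected
caller of the new zero-pgprot path is the pair of kdump hooks discussed in
the thread. The following is a hypothetical sketch of how those hooks might
drive create_pgd_mapping() with PAGE_KERNEL_INVALID; the use of crashk_res,
resource_size() and flush_tlb_all() here is an assumption based on the
series' description, not code from this email.]

```c
/* Hypothetical sketch -- not part of this patch. */
void arch_kexec_protect_crashkres(void)
{
	if (!crashk_res.end)
		return;

	/*
	 * PAGE_KERNEL_INVALID is __pgprot(0), so this takes the new
	 * __remove_pgd_mapping() path and unmaps the reserved region,
	 * making any stray access fault immediately.
	 */
	create_pgd_mapping(&init_mm, crashk_res.start,
			   __phys_to_virt(crashk_res.start),
			   resource_size(&crashk_res),
			   PAGE_KERNEL_INVALID, true);
	flush_tlb_all();
}

void arch_kexec_unprotect_crashkres(void)
{
	if (!crashk_res.end)
		return;

	/*
	 * Re-establish a normal writable mapping before the region is
	 * touched again (e.g. when loading a new crash kernel, or when
	 * shrinking the reservation via /sys/kernel/kexec_crash_size).
	 */
	create_pgd_mapping(&init_mm, crashk_res.start,
			   __phys_to_virt(crashk_res.start),
			   resource_size(&crashk_res),
			   PAGE_KERNEL, true);
	flush_tlb_all();
}
```

Note that page_mappings_only (the last argument) matters here: only a
page-granular linear map lets the region be unmapped and later freed at
page granularity, which is why the thread settles on keeping that option.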