From patchwork Wed Aug 3 18:23:08 2016
X-Patchwork-Submitter: Yinghai Lu
X-Patchwork-Id: 9262057
From: Yinghai Lu
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Kees Cook,
    "Rafael J. Wysocki", Pavel Machek
Cc: the arch/x86 maintainers, Linux Kernel Mailing List, Linux PM list,
    kernel-hardening@lists.openwall.com, Thomas Garnier, Yinghai Lu
Date: Wed, 3 Aug 2016 11:23:08 -0700
Message-Id: <20160803182308.19227-1-yinghai@kernel.org>
X-Mailer: git-send-email 2.8.3
References: <1470071280-78706-1-git-send-email-thgarnie@google.com>
    <1470071280-78706-2-git-send-email-thgarnie@google.com>
Subject: [kernel-hardening] [PATCH v2] x86/power/64: Support unaligned
    addresses for temporary mapping

From: Thomas Garnier

Correctly set up the temporary mapping for hibernation. The previous
implementation assumed that the offset between the kernel virtual
address (KVA) and the physical address (PA) was aligned on the PGD
level. With KASLR memory randomization enabled, the offset is
randomized on the PUD level, so this change adds support for offsets
that are unaligned down to the PMD level.

Signed-off-by: Thomas Garnier
[yinghai: change loop to virtual address]
Signed-off-by: Yinghai Lu
Acked-by: Rafael J. Wysocki
---
 arch/x86/mm/ident_map.c | 54 ++++++++++++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 22 deletions(-)

Index: linux-2.6/arch/x86/mm/ident_map.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/ident_map.c
+++ linux-2.6/arch/x86/mm/ident_map.c
@@ -3,40 +3,47 @@
  * included by both the compressed kernel and the regular kernel.
  */
 
-static void ident_pmd_init(unsigned long pmd_flag, pmd_t *pmd_page,
+static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
 			   unsigned long addr, unsigned long end)
 {
-	addr &= PMD_MASK;
-	for (; addr < end; addr += PMD_SIZE) {
-		pmd_t *pmd = pmd_page + pmd_index(addr);
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+
+	vaddr &= PMD_MASK;
+	for (; vaddr < vend; vaddr += PMD_SIZE) {
+		pmd_t *pmd = pmd_page + pmd_index(vaddr);
 
 		if (!pmd_present(*pmd))
-			set_pmd(pmd, __pmd(addr | pmd_flag));
+			set_pmd(pmd, __pmd((vaddr - off) | info->pmd_flag));
 	}
 }
 
 static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 			  unsigned long addr, unsigned long end)
 {
-	unsigned long next;
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+	unsigned long vnext;
 
-	for (; addr < end; addr = next) {
-		pud_t *pud = pud_page + pud_index(addr);
+	for (; vaddr < vend; vaddr = vnext) {
+		pud_t *pud = pud_page + pud_index(vaddr);
 		pmd_t *pmd;
 
-		next = (addr & PUD_MASK) + PUD_SIZE;
-		if (next > end)
-			next = end;
+		vnext = (vaddr & PUD_MASK) + PUD_SIZE;
+		if (vnext > vend)
+			vnext = vend;
 
 		if (pud_present(*pud)) {
 			pmd = pmd_offset(pud, 0);
-			ident_pmd_init(info->pmd_flag, pmd, addr, next);
+			ident_pmd_init(info, pmd, vaddr - off, vnext - off);
 			continue;
 		}
 
 		pmd = (pmd_t *)info->alloc_pgt_page(info->context);
 		if (!pmd)
 			return -ENOMEM;
-		ident_pmd_init(info->pmd_flag, pmd, addr, next);
+		ident_pmd_init(info, pmd, vaddr - off, vnext - off);
 		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
 	}
@@ -46,21 +53,24 @@ static int ident_pud_init(struct x86_map
 int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 			      unsigned long addr, unsigned long end)
 {
-	unsigned long next;
 	int result;
-	int off = info->kernel_mapping ? pgd_index(__PAGE_OFFSET) : 0;
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+	unsigned long vnext;
 
-	for (; addr < end; addr = next) {
-		pgd_t *pgd = pgd_page + pgd_index(addr) + off;
+	for (; vaddr < vend; vaddr = vnext) {
+		pgd_t *pgd = pgd_page + pgd_index(vaddr);
 		pud_t *pud;
 
-		next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-		if (next > end)
-			next = end;
+		vnext = (vaddr & PGDIR_MASK) + PGDIR_SIZE;
+		if (vnext > vend)
+			vnext = vend;
 
 		if (pgd_present(*pgd)) {
 			pud = pud_offset(pgd, 0);
-			result = ident_pud_init(info, pud, addr, next);
+			result = ident_pud_init(info, pud, vaddr - off,
+						vnext - off);
 			if (result)
 				return result;
 			continue;
@@ -69,7 +79,7 @@ int kernel_ident_mapping_init(struct x86
 		pud = (pud_t *)info->alloc_pgt_page(info->context);
 		if (!pud)
 			return -ENOMEM;
-		result = ident_pud_init(info, pud, addr, next);
+		result = ident_pud_init(info, pud, vaddr - off, vnext - off);
 		if (result)
 			return result;
 		set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
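
Illustration (not part of the patch): the key pattern in the rewritten
helpers is that each page-table index is computed from the kernel
virtual address, while the entry written into the table carries the
physical address (vaddr - off). The standalone userspace sketch below
mimics that with made-up DEMO_* constants standing in for the real x86
macros; demo_pmd_init() mirrors ident_pmd_init() after this patch, and
the 7 GiB offset in main() is a hypothetical KASLR-style offset that is
PUD-aligned but not PGD-aligned.

#include <stdio.h>

#define DEMO_PMD_SHIFT	21UL			/* 2 MiB pages, as on x86-64 */
#define DEMO_PMD_SIZE	(1UL << DEMO_PMD_SHIFT)
#define DEMO_PMD_MASK	(~(DEMO_PMD_SIZE - 1))
#define DEMO_PTRS_PER_PMD 512UL			/* entries per PMD page */

/* Index into a PMD page, derived from the *virtual* address. */
static unsigned long demo_pmd_index(unsigned long vaddr)
{
	return (vaddr >> DEMO_PMD_SHIFT) & (DEMO_PTRS_PER_PMD - 1);
}

/*
 * Walk the physical range [paddr, pend) in virtual space, like the
 * patched ident_pmd_init(): round vaddr down to a PMD boundary, step
 * by PMD_SIZE, and map each slot back to its physical address.
 */
static void demo_pmd_init(unsigned long paddr, unsigned long pend,
			  unsigned long off)
{
	unsigned long vaddr = paddr + off;
	unsigned long vend = pend + off;

	vaddr &= DEMO_PMD_MASK;
	for (; vaddr < vend; vaddr += DEMO_PMD_SIZE)
		printf("pmd[%3lu] <- pa 0x%012lx (va 0x%012lx)\n",
		       demo_pmd_index(vaddr), vaddr - off, vaddr);
}

int main(void)
{
	unsigned long off = 7UL << 30;	/* 7 GiB: PUD- but not PGD-aligned */

	demo_pmd_init(0x100000000UL, 0x100000000UL + 3 * DEMO_PMD_SIZE, off);
	return 0;
}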