From patchwork Wed Aug  3 18:23:08 2016
X-Patchwork-Submitter: Yinghai Lu
X-Patchwork-Id: 9261771
From: Yinghai Lu
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Kees Cook,
	"Rafael J. Wysocki", Pavel Machek
Cc: the arch/x86 maintainers, Linux Kernel Mailing List,
	Linux PM list, kernel-hardening@lists.openwall.com,
	Thomas Garnier, Yinghai Lu
Subject: [PATCH v2] x86/power/64: Support unaligned addresses for temporary
 mapping
Date: Wed, 3 Aug 2016 11:23:08 -0700
Message-Id: <20160803182308.19227-1-yinghai@kernel.org>
X-Mailer: git-send-email 2.8.3
In-Reply-To:
References: <1470071280-78706-1-git-send-email-thgarnie@google.com>
	<1470071280-78706-2-git-send-email-thgarnie@google.com>
X-Mailing-List: linux-pm@vger.kernel.org

From: Thomas Garnier

Correctly set up the temporary mapping for hibernation. The previous
implementation assumed that the offset between KVA and PA was aligned
at the PGD level. With KASLR memory randomization enabled, the offset
is randomized at the PUD level instead, so this change adds support
for offsets that are unaligned down to the PMD level.

Signed-off-by: Thomas Garnier
[yinghai: change loop to virtual address]
Signed-off-by: Yinghai Lu
Acked-by: Rafael J. Wysocki
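
To make the address arithmetic concrete, here is a minimal user-space
sketch (an illustration added in editing, not part of the patch;
PMD_SIZE_DEMO and PAGE_OFFSET_DEMO are made-up stand-ins for the
kernel's PMD_SIZE and __PAGE_OFFSET). The loops now advance in virtual
addresses, so the randomized KVA's alignment governs the PMD stepping,
and the offset is subtracted back to obtain the physical address that
goes into each entry:

	/* Editor's sketch, not kernel code. */
	#include <stdio.h>

	#define PMD_SIZE_DEMO	(1UL << 21)		/* 2 MiB, as on x86-64 */
	#define PMD_MASK_DEMO	(~(PMD_SIZE_DEMO - 1))
	/* Deliberately PMD- but not PUD-aligned, like a randomized base. */
	#define PAGE_OFFSET_DEMO 0xffff880000200000UL

	int main(void)
	{
		unsigned long addr = 0x100000UL, end = 0x800000UL; /* PA range */
		unsigned long off = PAGE_OFFSET_DEMO;		   /* KVA = PA + off */
		unsigned long vaddr = (addr + off) & PMD_MASK_DEMO;
		unsigned long vend = end + off;

		/* Walk in VA space so the offset's alignment is honored,
		 * then subtract the offset to recover the PA per entry. */
		for (; vaddr < vend; vaddr += PMD_SIZE_DEMO)
			printf("map VA %#lx -> PA %#lx\n", vaddr, vaddr - off);
		return 0;
	}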
---
 arch/x86/mm/ident_map.c | 54 ++++++++++++++++++++++++++++--------------------
 1 file changed, 32 insertions(+), 22 deletions(-)

Index: linux-2.6/arch/x86/mm/ident_map.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/ident_map.c
+++ linux-2.6/arch/x86/mm/ident_map.c
@@ -3,40 +3,47 @@
  * included by both the compressed kernel and the regular kernel.
  */
 
-static void ident_pmd_init(unsigned long pmd_flag, pmd_t *pmd_page,
+static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
 			   unsigned long addr, unsigned long end)
 {
-	addr &= PMD_MASK;
-	for (; addr < end; addr += PMD_SIZE) {
-		pmd_t *pmd = pmd_page + pmd_index(addr);
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+
+	vaddr &= PMD_MASK;
+	for (; vaddr < vend; vaddr += PMD_SIZE) {
+		pmd_t *pmd = pmd_page + pmd_index(vaddr);
 
 		if (!pmd_present(*pmd))
-			set_pmd(pmd, __pmd(addr | pmd_flag));
+			set_pmd(pmd, __pmd((vaddr - off) | info->pmd_flag));
 	}
 }
 
 static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 			  unsigned long addr, unsigned long end)
 {
-	unsigned long next;
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+	unsigned long vnext;
 
-	for (; addr < end; addr = next) {
-		pud_t *pud = pud_page + pud_index(addr);
+	for (; vaddr < vend; vaddr = vnext) {
+		pud_t *pud = pud_page + pud_index(vaddr);
 		pmd_t *pmd;
 
-		next = (addr & PUD_MASK) + PUD_SIZE;
-		if (next > end)
-			next = end;
+		vnext = (vaddr & PUD_MASK) + PUD_SIZE;
+		if (vnext > vend)
+			vnext = vend;
 
 		if (pud_present(*pud)) {
 			pmd = pmd_offset(pud, 0);
-			ident_pmd_init(info->pmd_flag, pmd, addr, next);
+			ident_pmd_init(info, pmd, vaddr - off, vnext - off);
 			continue;
 		}
 		pmd = (pmd_t *)info->alloc_pgt_page(info->context);
 		if (!pmd)
 			return -ENOMEM;
-		ident_pmd_init(info->pmd_flag, pmd, addr, next);
+		ident_pmd_init(info, pmd, vaddr - off, vnext - off);
 		set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
 	}
 
@@ -46,21 +53,24 @@ static int ident_pud_init(struct x86_map
 int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 			      unsigned long addr, unsigned long end)
 {
-	unsigned long next;
 	int result;
-	int off = info->kernel_mapping ? pgd_index(__PAGE_OFFSET) : 0;
+	unsigned long off = info->kernel_mapping ? __PAGE_OFFSET : 0;
+	unsigned long vaddr = addr + off;
+	unsigned long vend = end + off;
+	unsigned long vnext;
 
-	for (; addr < end; addr = next) {
-		pgd_t *pgd = pgd_page + pgd_index(addr) + off;
+	for (; vaddr < vend; vaddr = vnext) {
+		pgd_t *pgd = pgd_page + pgd_index(vaddr);
 		pud_t *pud;
 
-		next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-		if (next > end)
-			next = end;
+		vnext = (vaddr & PGDIR_MASK) + PGDIR_SIZE;
+		if (vnext > vend)
+			vnext = vend;
 
 		if (pgd_present(*pgd)) {
 			pud = pud_offset(pgd, 0);
-			result = ident_pud_init(info, pud, addr, next);
+			result = ident_pud_init(info, pud, vaddr - off,
+						vnext - off);
 			if (result)
 				return result;
 			continue;
@@ -69,7 +79,7 @@ int kernel_ident_mapping_init(struct x86
 		pud = (pud_t *)info->alloc_pgt_page(info->context);
 		if (!pud)
 			return -ENOMEM;
-		result = ident_pud_init(info, pud, addr, next);
+		result = ident_pud_init(info, pud, vaddr - off, vnext - off);
 		if (result)
 			return result;
 		set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
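
For context, the hibernation code reaches this helper when it rebuilds
its temporary page tables for resume. Below is an editor's sketch of
the caller side, loosely based on set_up_temporary_mappings() in
arch/x86/power/hibernate_64.c around v4.7; it relies on kernel-internal
symbols (alloc_pgt_page, pfn_mapped, temp_level4_pgt) and details may
differ from the tree this patch targets:

	static int set_up_temporary_mappings(void)
	{
		struct x86_mapping_info info = {
			.alloc_pgt_page	= alloc_pgt_page,
			.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC,
			.kernel_mapping	= true,	/* map at PA + __PAGE_OFFSET */
		};
		unsigned long mstart, mend;
		pgd_t *pgd;
		int result, i;

		pgd = (pgd_t *)get_safe_page(GFP_ATOMIC);
		if (!pgd)
			return -ENOMEM;

		/* Re-create the direct mapping from scratch. */
		for (i = 0; i < nr_pfn_mapped; i++) {
			mstart = pfn_mapped[i].start << PAGE_SHIFT;
			mend   = pfn_mapped[i].end << PAGE_SHIFT;

			result = kernel_ident_mapping_init(&info, pgd,
							   mstart, mend);
			if (result)
				return result;
		}

		temp_level4_pgt = (unsigned long)pgd - __PAGE_OFFSET;
		return 0;
	}

With kernel_mapping = true, the physical ranges handed in as mstart/mend
are installed at the (possibly PUD-unaligned) kernel virtual offset,
which is exactly the case this patch fixes.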