From patchwork Mon Aug 8 07:23:33 2016
X-Patchwork-Submitter: Yinghai Lu
X-Patchwork-Id: 9266865
From: Yinghai Lu
Date: Mon, 8 Aug 2016 00:23:33 -0700
Subject: Re: [PATCH v2] x86/power/64: Support unaligned addresses for temporary mapping
References: <1470071280-78706-1-git-send-email-thgarnie@google.com>
 <2213000.eZV9GAcFWG@vostro.rjw.lan> <2869477.o6QceH2ItE@vostro.rjw.lan>
To: "Rafael J. Wysocki"
Cc: Thomas Garnier, "Rafael J. Wysocki", Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", Kees Cook, Pavel Machek, the arch/x86 maintainers,
 Linux Kernel Mailing List, Linux PM list, kernel-hardening@lists.openwall.com
X-Mailing-List: linux-pm@vger.kernel.org

On Mon, Aug 8, 2016 at 12:06 AM, Yinghai Lu wrote:
>>
>>> At the same time, set_up_temporary_text_mapping() could be replaced with
>>> kernel_ident_mapping_init() too, if restore_jump_address is the KVA for
>>> jump_address_phys.
>>
>> I see no reason to do that.
>>
>> First, it is not guaranteed that restore_jump_address will always be a KVA for
>> jump_address_phys and second, it really is only necessary to map one PMD in
>> there.
>
> With your v2 version, you could pass the difference between
> restore_jump_address and jump_address_phys as info->offset?
> With that, we can kill more lines by replacing set_up_temporary_text_mapping()
> with kernel_ident_mapping_init(), and the code becomes more readable.
>
> But just keep that in a separate patch after your v2 patch.

like:

---
 arch/x86/power/hibernate_64.c | 55 ++++++++++++------------------------------
 1 file changed, 17 insertions(+), 38 deletions(-)

Index: linux-2.6/arch/x86/power/hibernate_64.c
===================================================================
--- linux-2.6.orig/arch/x86/power/hibernate_64.c
+++ linux-2.6/arch/x86/power/hibernate_64.c
@@ -41,42 +41,6 @@ unsigned long temp_level4_pgt __visible;
 
 unsigned long relocated_restore_code __visible;
 
-static int set_up_temporary_text_mapping(pgd_t *pgd)
-{
-	pmd_t *pmd;
-	pud_t *pud;
-
-	/*
-	 * The new mapping only has to cover the page containing the image
-	 * kernel's entry point (jump_address_phys), because the switch over to
-	 * it is carried out by relocated code running from a page allocated
-	 * specifically for this purpose and covered by the identity mapping, so
-	 * the temporary kernel text mapping is only needed for the final jump.
-	 * Moreover, in that mapping the virtual address of the image kernel's
-	 * entry point must be the same as its virtual address in the image
-	 * kernel (restore_jump_address), so the image kernel's
-	 * restore_registers() code doesn't find itself in a different area of
-	 * the virtual address space after switching over to the original page
-	 * tables used by the image kernel.
-	 */
-	pud = (pud_t *)get_safe_page(GFP_ATOMIC);
-	if (!pud)
-		return -ENOMEM;
-
-	pmd = (pmd_t *)get_safe_page(GFP_ATOMIC);
-	if (!pmd)
-		return -ENOMEM;
-
-	set_pmd(pmd + pmd_index(restore_jump_address),
-		__pmd((jump_address_phys & PMD_MASK) | __PAGE_KERNEL_LARGE_EXEC));
-	set_pud(pud + pud_index(restore_jump_address),
-		__pud(__pa(pmd) | _KERNPG_TABLE));
-	set_pgd(pgd + pgd_index(restore_jump_address),
-		__pgd(__pa(pud) | _KERNPG_TABLE));
-
-	return 0;
-}
-
 static void *alloc_pgt_page(void *context)
 {
 	return (void *)get_safe_page(GFP_ATOMIC);
@@ -87,7 +51,6 @@ static int set_up_temporary_mappings(voi
 	struct x86_mapping_info info = {
 		.alloc_pgt_page	= alloc_pgt_page,
 		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC,
-		.offset		= __PAGE_OFFSET,
 	};
 	unsigned long mstart, mend;
 	pgd_t *pgd;
@@ -99,11 +62,27 @@ static int set_up_temporary_mappings(voi
 		return -ENOMEM;
 
 	/* Prepare a temporary mapping for the kernel text */
-	result = set_up_temporary_text_mapping(pgd);
+	/*
+	 * The new mapping only has to cover the page containing the image
+	 * kernel's entry point (jump_address_phys), because the switch over to
+	 * it is carried out by relocated code running from a page allocated
+	 * specifically for this purpose and covered by the identity mapping, so
+	 * the temporary kernel text mapping is only needed for the final jump.
+	 * Moreover, in that mapping the virtual address of the image kernel's
+	 * entry point must be the same as its virtual address in the image
+	 * kernel (restore_jump_address), so the image kernel's
+	 * restore_registers() code doesn't find itself in a different area of
+	 * the virtual address space after switching over to the original page
+	 * tables used by the image kernel.
+	 */
+	info.offset = restore_jump_address - jump_address_phys;
+	result = kernel_ident_mapping_init(&info, pgd, jump_address_phys,
+					   jump_address_phys + PMD_SIZE);
 	if (result)
 		return result;
 
 	/* Set up the direct mapping from scratch */
+	info.offset = __PAGE_OFFSET;
 	for (i = 0; i < nr_pfn_mapped; i++) {
 		mstart = pfn_mapped[i].start << PAGE_SHIFT;
 		mend   = pfn_mapped[i].end << PAGE_SHIFT;
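
To make the effect of the info.offset choice above concrete, here is a minimal
standalone sketch of the address arithmetic (userspace C, not kernel code: the
macros and example addresses below are local stand-ins chosen only for
illustration). The idea is that kernel_ident_mapping_init() maps a physical
range at "physical address plus offset", so offset = restore_jump_address -
jump_address_phys lands the image kernel's entry point at restore_jump_address,
while offset = __PAGE_OFFSET reproduces the direct mapping.

/* Standalone illustration only; these are not the kernel's definitions. */
#include <stdint.h>
#include <stdio.h>

#define PMD_SHIFT	21
#define PMD_SIZE	(1ULL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE - 1))
#define PAGE_OFFSET	0xffff880000000000ULL	/* stand-in for the direct-map base */

int main(void)
{
	/* Hypothetical example values, not real addresses from any system. */
	uint64_t jump_address_phys    = 0x0000000001c00000ULL; /* entry point, physical */
	uint64_t restore_jump_address = 0xffffffff81c00000ULL; /* its KVA in the image  */

	/* Offset used for the temporary text mapping in the patch above. */
	uint64_t text_offset = restore_jump_address - jump_address_phys;

	/* The 2 MB page covering the entry point, and where it gets mapped. */
	uint64_t phys_pmd = jump_address_phys & PMD_MASK;

	printf("text mapping : phys %#llx -> virt %#llx\n",
	       (unsigned long long)phys_pmd,
	       (unsigned long long)(phys_pmd + text_offset));
	printf("entry point  : phys %#llx -> virt %#llx (== restore_jump_address)\n",
	       (unsigned long long)jump_address_phys,
	       (unsigned long long)(jump_address_phys + text_offset));

	/* The direct-mapping pass simply uses PAGE_OFFSET as the offset. */
	printf("direct map   : phys %#llx -> virt %#llx\n",
	       (unsigned long long)jump_address_phys,
	       (unsigned long long)(jump_address_phys + PAGE_OFFSET));
	return 0;
}

If restore_jump_address does not share jump_address_phys's alignment within a
2 MB page, the offset is simply not PMD-aligned; how kernel_ident_mapping_init()
copes with that is exactly what the v2 patch under discussion addresses, so the
sketch deliberately stays out of the page-table details.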