From patchwork Sun Nov 1 17:08:13 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11872233
From: Mike Rapoport
To: Andrew Morton
Cc: Albert Ou, Andy Lutomirski, Benjamin Herrenschmidt, Borislav Petkov,
    Catalin Marinas, Christian Borntraeger, Christoph Lameter,
    "David S. Miller", Dave Hansen, David Hildenbrand, David Rientjes,
    "Edgecombe, Rick P", "H. Peter Anvin", Heiko Carstens, Ingo Molnar,
    Joonsoo Kim, "Kirill A. Shutemov", Len Brown, Michael Ellerman,
    Mike Rapoport, Palmer Dabbelt, Paul Mackerras, Paul Walmsley,
    Pavel Machek, Pekka Enberg, Peter Zijlstra, "Rafael J. Wysocki",
    Thomas Gleixner, Vasily Gorbik, Will Deacon,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-pm@vger.kernel.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org,
    x86@kernel.org
Subject: [PATCH v3 2/4] PM: hibernate: make direct map manipulations more explicit
Date: Sun, 1 Nov 2020 19:08:13 +0200
Message-Id: <20201101170815.9795-3-rppt@kernel.org>
In-Reply-To: <20201101170815.9795-1-rppt@kernel.org>
References: <20201101170815.9795-1-rppt@kernel.org>

From: Mike Rapoport

When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled, a page may
not be present in the direct map and has to be explicitly mapped before
it can be copied.

Introduce hibernate_map_page() that will explicitly use
set_direct_map_{default,invalid}_noflush() in the
ARCH_HAS_SET_DIRECT_MAP case and debug_pagealloc_map_pages() in the
DEBUG_PAGEALLOC case.
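In essence, this turns the copy of a page that is not present in the
direct map into the following pattern (illustrative sketch only; the
actual code is in safe_copy_page() in the diff below):

	hibernate_map_page(s_page, 1);           /* restore the direct mapping */
	do_copy_page(dst, page_address(s_page)); /* page is now accessible */
	hibernate_map_page(s_page, 0);           /* invalidate the mapping again */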
The remapping of the pages in safe_copy_page() presumes that it only
changes protection bits in an existing PTE, so it is safe to ignore the
return value of set_direct_map_{default,invalid}_noflush(). Still, add
a WARN_ON() so that future changes in the set_memory APIs will not
silently break hibernation.

Signed-off-by: Mike Rapoport
Acked-by: Rafael J. Wysocki
Reviewed-by: David Hildenbrand
---
 include/linux/mm.h      | 12 ------------
 kernel/power/snapshot.c | 30 ++++++++++++++++++++++++++++--
 2 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1fc0609056dc..14e397f3752c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2927,16 +2927,6 @@ static inline bool debug_pagealloc_enabled_static(void)
 #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
 
-/*
- * When called in DEBUG_PAGEALLOC context, the call should most likely be
- * guarded by debug_pagealloc_enabled() or debug_pagealloc_enabled_static()
- */
-static inline void
-kernel_map_pages(struct page *page, int numpages, int enable)
-{
-	__kernel_map_pages(page, numpages, enable);
-}
-
 static inline void debug_pagealloc_map_pages(struct page *page,
 					     int numpages, int enable)
 {
@@ -2948,8 +2938,6 @@ static inline void debug_pagealloc_map_pages(struct page *page,
 extern bool kernel_page_present(struct page *page);
 #endif /* CONFIG_HIBERNATION */
 #else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
-static inline void
-kernel_map_pages(struct page *page, int numpages, int enable) {}
 static inline void debug_pagealloc_map_pages(struct page *page,
 					     int numpages, int enable) {}
 #ifdef CONFIG_HIBERNATION
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 46b1804c1ddf..054c8cce4236 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -76,6 +76,32 @@ static inline void hibernate_restore_protect_page(void *page_address) {}
 static inline void hibernate_restore_unprotect_page(void *page_address) {}
 #endif /* CONFIG_STRICT_KERNEL_RWX && CONFIG_ARCH_HAS_SET_MEMORY */
 
+static inline void hibernate_map_page(struct page *page, int enable)
+{
+	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
+		unsigned long addr = (unsigned long)page_address(page);
+		int ret;
+
+		/*
+		 * This should not fail because remapping a page here means
+		 * that we only update protection bits in an existing PTE.
+		 * It is still worth having a WARN_ON() here if something
+		 * changes and this is no longer the case.
+		 */
+		if (enable)
+			ret = set_direct_map_default_noflush(page);
+		else
+			ret = set_direct_map_invalid_noflush(page);
+
+		if (WARN_ON(ret))
+			return;
+
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+	} else {
+		debug_pagealloc_map_pages(page, 1, enable);
+	}
+}
+
 static int swsusp_page_is_free(struct page *);
 static void swsusp_set_page_forbidden(struct page *);
 static void swsusp_unset_page_forbidden(struct page *);
@@ -1355,9 +1381,9 @@ static void safe_copy_page(void *dst, struct page *s_page)
 	if (kernel_page_present(s_page)) {
 		do_copy_page(dst, page_address(s_page));
 	} else {
-		kernel_map_pages(s_page, 1, 1);
+		hibernate_map_page(s_page, 1);
 		do_copy_page(dst, page_address(s_page));
-		kernel_map_pages(s_page, 1, 0);
+		hibernate_map_page(s_page, 0);
 	}
 }
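For reference, the debug_pagealloc_map_pages() fallback used above is
the helper introduced earlier in this series; its assumed shape
(sketched here for convenience, per patch 1/4, not part of this diff)
is a guarded wrapper around __kernel_map_pages():

static inline void debug_pagealloc_map_pages(struct page *page,
					     int numpages, int enable)
{
	/* only touch the direct map when debug_pagealloc is active */
	if (debug_pagealloc_enabled_static())
		__kernel_map_pages(page, numpages, enable);
}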