From patchwork Fri Apr 26 00:11:35 2019
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10918693
From: Nadav Amit <namit@vmware.com>
To: Peter Zijlstra, Borislav Petkov, Andy Lutomirski, Ingo Molnar
CC: Thomas Gleixner, Nadav Amit, Dave Hansen, Rick Edgecombe,
    "Rafael J. Wysocki", Pavel Machek
Subject: [PATCH v5 15/23] mm: Make hibernate handle unmapped pages
Date: Thu, 25 Apr 2019 17:11:35 -0700
Message-ID: <20190426001143.4983-16-namit@vmware.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190426001143.4983-1-namit@vmware.com>
References: <20190426001143.4983-1-namit@vmware.com>

From: Rick Edgecombe

Make hibernate handle unmapped pages on the direct map when
CONFIG_ARCH_HAS_SET_DIRECT_MAP is set. The set_direct_map_*() functions
allow pages to be set to invalid configurations, so hibernate should now
check whether pages have valid mappings and handle them if they are
unmapped when doing a hibernate save operation.

Previously, this checking was only done when CONFIG_DEBUG_PAGEALLOC was
configured. The change does not appear to have a significant impact on
hibernation performance: the save operation was measured at 819.02 MB/s
before this change and 813.32 MB/s after.

Before:
[    4.670938] PM: Wrote 171996 kbytes in 0.21 seconds (819.02 MB/s)

After:
[    4.504714] PM: Wrote 178932 kbytes in 0.22 seconds (813.32 MB/s)
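For reference, the hibernate-side fallback that this change makes reachable
without CONFIG_DEBUG_PAGEALLOC is the existing safe_copy_page() in
kernel/power/snapshot.c, which (slightly simplified, and not part of this
diff) looks like:

/*
 * Simplified sketch of the existing safe_copy_page() logic that this
 * patch makes reachable whenever CONFIG_ARCH_HAS_SET_DIRECT_MAP is set.
 */
static void safe_copy_page(void *dst, struct page *s_page)
{
	if (kernel_page_present(s_page)) {
		/* The direct-map entry is valid: copy in place. */
		do_copy_page(dst, page_address(s_page));
	} else {
		/*
		 * The page is unmapped in the direct map: temporarily
		 * map it, copy, and unmap it again.
		 */
		kernel_map_pages(s_page, 1, 1);
		do_copy_page(dst, page_address(s_page));
		kernel_map_pages(s_page, 1, 0);
	}
}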
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: "Rafael J. Wysocki"
Cc: Pavel Machek
Cc: Borislav Petkov
Acked-by: Pavel Machek
Signed-off-by: Rick Edgecombe
---
 arch/x86/mm/pageattr.c  |  4 ----
 include/linux/mm.h      | 18 ++++++------------
 kernel/power/snapshot.c |  5 +++--
 mm/page_alloc.c         |  7 +++++--
 4 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 3574550192c6..daf4d645e537 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -2257,7 +2257,6 @@ int set_direct_map_default_noflush(struct page *page)
 	return __set_pages_p(page, 1);
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
@@ -2302,11 +2301,8 @@ bool kernel_page_present(struct page *page)
 	pte = lookup_address((unsigned long)page_address(page), &level);
 	return (pte_val(*pte) & _PAGE_PRESENT);
 }
-
 #endif /* CONFIG_HIBERNATION */
 
-#endif /* CONFIG_DEBUG_PAGEALLOC */
-
 int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
 				   unsigned numpages, unsigned long page_flags)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6b10c21630f5..083d7b4863ed 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2610,37 +2610,31 @@ static inline void kernel_poison_pages(struct page *page, int numpages,
 					int enable) { }
 #endif
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
 extern bool _debug_pagealloc_enabled;
-extern void __kernel_map_pages(struct page *page, int numpages, int enable);
 
 static inline bool debug_pagealloc_enabled(void)
 {
-	return _debug_pagealloc_enabled;
+	return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) && _debug_pagealloc_enabled;
 }
 
+#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
+extern void __kernel_map_pages(struct page *page, int numpages, int enable);
+
 static inline void
 kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!debug_pagealloc_enabled())
-		return;
-
 	__kernel_map_pages(page, numpages, enable);
 }
 #ifdef CONFIG_HIBERNATION
 extern bool kernel_page_present(struct page *page);
 #endif /* CONFIG_HIBERNATION */
-#else	/* CONFIG_DEBUG_PAGEALLOC */
+#else	/* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 static inline void
 kernel_map_pages(struct page *page, int numpages, int enable) {}
 #ifdef CONFIG_HIBERNATION
 static inline bool kernel_page_present(struct page *page) { return true; }
 #endif /* CONFIG_HIBERNATION */
-static inline bool debug_pagealloc_enabled(void)
-{
-	return false;
-}
-#endif /* CONFIG_DEBUG_PAGEALLOC */
+#endif /* CONFIG_DEBUG_PAGEALLOC || CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 
 #ifdef __HAVE_ARCH_GATE_AREA
 extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
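A note on the include/linux/mm.h hunk above: kernel_map_pages() no longer
checks debug_pagealloc_enabled() internally, so debug-only callers must now
perform the check themselves (see the mm/page_alloc.c hunks below), while
hibernate can reach __kernel_map_pages() even when CONFIG_DEBUG_PAGEALLOC is
off. The standalone userspace sketch below shows why the new
debug_pagealloc_enabled() still compiles down to a constant 'false' when the
config is disabled; the IS_ENABLED_DEBUG_PAGEALLOC macro is a simplified
stand-in for the kernel's IS_ENABLED(), and everything else is hypothetical
demo code:

#include <stdbool.h>
#include <stdio.h>

/* Toggle this definition to mimic CONFIG_DEBUG_PAGEALLOC=y/n. */
#define CONFIG_DEBUG_PAGEALLOC 1

/* Simplified stand-in for the kernel's IS_ENABLED() macro. */
#ifdef CONFIG_DEBUG_PAGEALLOC
#define IS_ENABLED_DEBUG_PAGEALLOC 1
#else
#define IS_ENABLED_DEBUG_PAGEALLOC 0
#endif

/* Stand-in for the boot-time "debug_pagealloc=on" toggle. */
static bool _debug_pagealloc_enabled = true;

static inline bool debug_pagealloc_enabled(void)
{
	/*
	 * When the config is off, the left operand is the constant 0, so
	 * the compiler drops the _debug_pagealloc_enabled read entirely
	 * and guarded calls such as kernel_map_pages() become dead code.
	 */
	return IS_ENABLED_DEBUG_PAGEALLOC && _debug_pagealloc_enabled;
}

int main(void)
{
	printf("debug_pagealloc enabled: %d\n", debug_pagealloc_enabled());
	return 0;
}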
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index f08a1e4ee1d4..bc9558ab1e5b 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1342,8 +1342,9 @@ static inline void do_copy_page(long *dst, long *src)
  * safe_copy_page - Copy a page in a safe way.
  *
  * Check if the page we are going to copy is marked as present in the kernel
- * page tables (this always is the case if CONFIG_DEBUG_PAGEALLOC is not set
- * and in that case kernel_page_present() always returns 'true').
+ * page tables. This always is the case if CONFIG_DEBUG_PAGEALLOC or
+ * CONFIG_ARCH_HAS_SET_DIRECT_MAP is not set. In that case kernel_page_present()
+ * always returns 'true'.
  */
 static void safe_copy_page(void *dst, struct page *s_page)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d96ca5bc555b..34a70681a4af 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1131,7 +1131,9 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	}
 	arch_free_page(page, order);
 	kernel_poison_pages(page, 1 << order, 0);
-	kernel_map_pages(page, 1 << order, 0);
+	if (debug_pagealloc_enabled())
+		kernel_map_pages(page, 1 << order, 0);
+
 	kasan_free_nondeferred_pages(page, order);
 
 	return true;
@@ -2001,7 +2003,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
-	kernel_map_pages(page, 1 << order, 1);
+	if (debug_pagealloc_enabled())
+		kernel_map_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 	kernel_poison_pages(page, 1 << order, 1);
 	set_page_owner(page, order, gfp_flags);
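For context on how a direct-map page becomes unmapped in the first place, a
rough sketch follows. Only set_direct_map_invalid_noflush() /
set_direct_map_default_noflush() (added earlier in this series) and
flush_tlb_kernel_range() are real interfaces; the surrounding function is a
hypothetical caller, not part of this patch:

/*
 * Hypothetical illustration of the lifecycle that makes
 * kernel_page_present() return false for a page.
 */
static void unmap_then_restore_example(struct page *page)
{
	unsigned long addr = (unsigned long)page_address(page);

	/* Clear the page's PTE in the direct map... */
	set_direct_map_invalid_noflush(page);
	/* ...and make the unmap visible on all CPUs. */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	/*
	 * While the page is in this state, a hibernate save must take the
	 * kernel_page_present() fallback path instead of reading through
	 * the (now invalid) direct mapping.
	 */

	/* Restore the default mapping before the page is reused. */
	set_direct_map_default_noflush(page);
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
}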