From patchwork Wed Sep 25 14:30:50 2019
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 11160807
From: Vlastimil Babka
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, Qian Cai, "Kirill A. Shutemov",
    Matthew Wilcox, Mel Gorman, Michal Hocko, Vlastimil Babka,
Shutemov" Subject: [PATCH 1/3] mm, page_owner: fix off-by-one error in __set_page_owner_handle() Date: Wed, 25 Sep 2019 16:30:50 +0200 Message-Id: <20190925143056.25853-2-vbabka@suse.cz> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20190925143056.25853-1-vbabka@suse.cz> References: <20190925143056.25853-1-vbabka@suse.cz> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: As noted by Kirill, commit 7e2f2a0cd17c ("mm, page_owner: record page owner for each subpage") has introduced an off-by-one error in __set_page_owner_handle() when looking up page_ext for subpages. As a result, the head page page_owner info is set twice, while for the last tail page, it's not set at all. Fix this and also make the code more efficient by advancing the page_ext pointer we already have, instead of calling lookup_page_ext() for each subpage. Since the full size of struct page_ext is not known at compile time, we can't use a simple page_ext++ statement, so introduce a page_ext_next() inline function for that. Reported-by: Kirill A. Shutemov Fixes: 7e2f2a0cd17c ("mm, page_owner: record page owner for each subpage") Signed-off-by: Vlastimil Babka Acked-by: Kirill A. Shutemov --- include/linux/page_ext.h | 8 ++++++++ mm/page_ext.c | 23 +++++++++-------------- mm/page_owner.c | 15 +++++++-------- 3 files changed, 24 insertions(+), 22 deletions(-) diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h index 682fd465df06..5e856512bafb 100644 --- a/include/linux/page_ext.h +++ b/include/linux/page_ext.h @@ -36,6 +36,7 @@ struct page_ext { unsigned long flags; }; +extern unsigned long page_ext_size; extern void pgdat_page_ext_init(struct pglist_data *pgdat); #ifdef CONFIG_SPARSEMEM @@ -52,6 +53,13 @@ static inline void page_ext_init(void) struct page_ext *lookup_page_ext(const struct page *page); +static inline struct page_ext *page_ext_next(struct page_ext *curr) +{ + void *next = curr; + next += page_ext_size; + return next; +} + #else /* !CONFIG_PAGE_EXTENSION */ struct page_ext; diff --git a/mm/page_ext.c b/mm/page_ext.c index 5f5769c7db3b..4ade843ff588 100644 --- a/mm/page_ext.c +++ b/mm/page_ext.c @@ -67,8 +67,9 @@ static struct page_ext_operations *page_ext_ops[] = { #endif }; +unsigned long page_ext_size = sizeof(struct page_ext); + static unsigned long total_usage; -static unsigned long extra_mem; static bool __init invoke_need_callbacks(void) { @@ -78,9 +79,8 @@ static bool __init invoke_need_callbacks(void) for (i = 0; i < entries; i++) { if (page_ext_ops[i]->need && page_ext_ops[i]->need()) { - page_ext_ops[i]->offset = sizeof(struct page_ext) + - extra_mem; - extra_mem += page_ext_ops[i]->size; + page_ext_ops[i]->offset = page_ext_size; + page_ext_size += page_ext_ops[i]->size; need = true; } } @@ -99,14 +99,9 @@ static void __init invoke_init_callbacks(void) } } -static unsigned long get_entry_size(void) -{ - return sizeof(struct page_ext) + extra_mem; -} - static inline struct page_ext *get_entry(void *base, unsigned long index) { - return base + get_entry_size() * index; + return base + page_ext_size * index; } #if !defined(CONFIG_SPARSEMEM) @@ -156,7 +151,7 @@ static int __init alloc_node_page_ext(int nid) !IS_ALIGNED(node_end_pfn(nid), MAX_ORDER_NR_PAGES)) nr_pages += MAX_ORDER_NR_PAGES; - table_size = get_entry_size() * nr_pages; + table_size = page_ext_size * nr_pages; base = memblock_alloc_try_nid( table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS), @@ -234,7 +229,7 
 include/linux/page_ext.h |  8 ++++++++
 mm/page_ext.c            | 23 +++++++++--------------
 mm/page_owner.c          | 15 +++++++--------
 3 files changed, 24 insertions(+), 22 deletions(-)

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index 682fd465df06..5e856512bafb 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -36,6 +36,7 @@ struct page_ext {
 	unsigned long flags;
 };
 
+extern unsigned long page_ext_size;
 extern void pgdat_page_ext_init(struct pglist_data *pgdat);
 
 #ifdef CONFIG_SPARSEMEM
@@ -52,6 +53,13 @@ static inline void page_ext_init(void)
 
 struct page_ext *lookup_page_ext(const struct page *page);
 
+static inline struct page_ext *page_ext_next(struct page_ext *curr)
+{
+	void *next = curr;
+	next += page_ext_size;
+	return next;
+}
+
 #else /* !CONFIG_PAGE_EXTENSION */
 
 struct page_ext;
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 5f5769c7db3b..4ade843ff588 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -67,8 +67,9 @@ static struct page_ext_operations *page_ext_ops[] = {
 #endif
 };
 
+unsigned long page_ext_size = sizeof(struct page_ext);
+
 static unsigned long total_usage;
-static unsigned long extra_mem;
 
 static bool __init invoke_need_callbacks(void)
 {
@@ -78,9 +79,8 @@ static bool __init invoke_need_callbacks(void)
 
 	for (i = 0; i < entries; i++) {
 		if (page_ext_ops[i]->need && page_ext_ops[i]->need()) {
-			page_ext_ops[i]->offset = sizeof(struct page_ext) +
-						extra_mem;
-			extra_mem += page_ext_ops[i]->size;
+			page_ext_ops[i]->offset = page_ext_size;
+			page_ext_size += page_ext_ops[i]->size;
 			need = true;
 		}
 	}
@@ -99,14 +99,9 @@ static void __init invoke_init_callbacks(void)
 	}
 }
 
-static unsigned long get_entry_size(void)
-{
-	return sizeof(struct page_ext) + extra_mem;
-}
-
 static inline struct page_ext *get_entry(void *base, unsigned long index)
 {
-	return base + get_entry_size() * index;
+	return base + page_ext_size * index;
 }
 
 #if !defined(CONFIG_SPARSEMEM)
@@ -156,7 +151,7 @@ static int __init alloc_node_page_ext(int nid)
 		!IS_ALIGNED(node_end_pfn(nid), MAX_ORDER_NR_PAGES))
 		nr_pages += MAX_ORDER_NR_PAGES;
 
-	table_size = get_entry_size() * nr_pages;
+	table_size = page_ext_size * nr_pages;
 
 	base = memblock_alloc_try_nid(
 			table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
@@ -234,7 +229,7 @@ static int __meminit init_section_page_ext(unsigned long pfn, int nid)
 	if (section->page_ext)
 		return 0;
 
-	table_size = get_entry_size() * PAGES_PER_SECTION;
+	table_size = page_ext_size * PAGES_PER_SECTION;
 	base = alloc_page_ext(table_size, nid);
 
 	/*
@@ -254,7 +249,7 @@ static int __meminit init_section_page_ext(unsigned long pfn, int nid)
 	 * we need to apply a mask.
 	 */
 	pfn &= PAGE_SECTION_MASK;
-	section->page_ext = (void *)base - get_entry_size() * pfn;
+	section->page_ext = (void *)base - page_ext_size * pfn;
 	total_usage += table_size;
 	return 0;
 }
@@ -267,7 +262,7 @@ static void free_page_ext(void *addr)
 		struct page *page = virt_to_page(addr);
 		size_t table_size;
 
-		table_size = get_entry_size() * PAGES_PER_SECTION;
+		table_size = page_ext_size * PAGES_PER_SECTION;
 
 		BUG_ON(PageReserved(page));
 		kmemleak_free(addr);
diff --git a/mm/page_owner.c b/mm/page_owner.c
index dee931184788..d3cf5d336ccf 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -156,10 +156,10 @@ void __reset_page_owner(struct page *page, unsigned int order)
 	handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
 #endif
 
+	page_ext = lookup_page_ext(page);
+	if (unlikely(!page_ext))
+		return;
 	for (i = 0; i < (1 << order); i++) {
-		page_ext = lookup_page_ext(page + i);
-		if (unlikely(!page_ext))
-			continue;
 		__clear_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags);
 #ifdef CONFIG_DEBUG_PAGEALLOC
 		if (debug_pagealloc_enabled()) {
@@ -167,6 +167,7 @@ void __reset_page_owner(struct page *page, unsigned int order)
 			page_owner->free_handle = handle;
 		}
 #endif
+		page_ext = page_ext_next(page_ext);
 	}
 }
 
@@ -186,7 +187,7 @@ static inline void __set_page_owner_handle(struct page *page,
 		__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
 		__set_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags);
 
-		page_ext = lookup_page_ext(page + i);
+		page_ext = page_ext_next(page_ext);
 	}
 }
 
@@ -224,12 +225,10 @@ void __split_page_owner(struct page *page, unsigned int order)
 	if (unlikely(!page_ext))
 		return;
 
-	page_owner = get_page_owner(page_ext);
-	page_owner->order = 0;
-	for (i = 1; i < (1 << order); i++) {
-		page_ext = lookup_page_ext(page + i);
+	for (i = 0; i < (1 << order); i++) {
 		page_owner = get_page_owner(page_ext);
 		page_owner->order = 0;
+		page_ext = page_ext_next(page_ext);
 	}
 }

From patchwork Wed Sep 25 14:30:51 2019
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 11160813
From: Vlastimil Babka
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, Qian Cai, "Kirill A. Shutemov",
    Matthew Wilcox, Mel Gorman, Michal Hocko, Vlastimil Babka,
    Dmitry Vyukov, Walter Wu, Andrey Ryabinin
Subject: [PATCH 2/3] mm, debug, kasan: save and dump freeing stack trace for kasan
Date: Wed, 25 Sep 2019 16:30:51 +0200
Message-Id: <20190925143056.25853-3-vbabka@suse.cz>
In-Reply-To: <20190925143056.25853-1-vbabka@suse.cz>
References: <20190925143056.25853-1-vbabka@suse.cz>

Commit 8974558f49a6 ("mm, page_owner, debug_pagealloc: save and dump
freeing stack trace") enhanced page_owner to also store the freeing
stack trace, when debug_pagealloc is also enabled. KASAN would like to
do the same [1] to improve its error reports for debugging e.g.
use-after-free issues.

This patch therefore introduces a helper config option
PAGE_OWNER_FREE_STACK, which is enabled when PAGE_OWNER and either
DEBUG_PAGEALLOC or KASAN is enabled. At boot time, free stack saving is
enabled when booting a KASAN kernel with page_owner=on, or a non-KASAN
kernel with both debug_pagealloc=on and page_owner=on.

[1] https://bugzilla.kernel.org/show_bug.cgi?id=203967

Suggested-by: Dmitry Vyukov
Suggested-by: Walter Wu
Suggested-by: Andrey Ryabinin
Signed-off-by: Vlastimil Babka
Reviewed-by: Andrey Ryabinin
---
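As an aside, the runtime gating reduces to a one-time decision at init
plus a check on every page free. A compilable userspace analogue of that
shape (all names illustrative; in the kernel the check is a static key,
static_branch_unlikely(), whose disabled case is roughly a patched-in NOP
rather than a load-and-branch, so the hotpath cost when disabled is
negligible):

#include <stdbool.h>
#include <stdio.h>

static bool page_owner_free_stack;	/* kernel: DEFINE_STATIC_KEY_FALSE */

/* stand-ins for the kernel-side predicates */
static bool kasan_enabled(void)           { return true;  } /* IS_ENABLED(CONFIG_KASAN) */
static bool debug_pagealloc_enabled(void) { return false; }

static void init_page_owner(void)
{
	/* kernel: static_branch_enable(&page_owner_free_stack) */
	if (kasan_enabled() || debug_pagealloc_enabled())
		page_owner_free_stack = true;
}

static void reset_page_owner(void)
{
	/* kernel: if (static_branch_unlikely(&page_owner_free_stack)) */
	if (page_owner_free_stack)
		printf("free: saving stack trace handle\n");
	else
		printf("free: skipping stack trace\n");
}

int main(void)
{
	init_page_owner();
	reset_page_owner();
	return 0;
}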
 Documentation/dev-tools/kasan.rst |  4 ++++
 mm/Kconfig.debug                  |  4 ++++
 mm/page_owner.c                   | 31 ++++++++++++++++++-------------
 3 files changed, 26 insertions(+), 13 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index b72d07d70239..434e605030e9 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -41,6 +41,10 @@ smaller binary while the latter is 1.1 - 2 times faster.
 Both KASAN modes work with both SLUB and SLAB memory allocators.
 For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
 
+To augment reports with last allocation and freeing stack of the physical
+page, it is recommended to configure kernel also with CONFIG_PAGE_OWNER=y
+and boot with page_owner=on.
+
 To disable instrumentation for specific files or directories, add a line
 similar to the following to the respective kernel Makefile:
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 327b3ebf23bf..1ea247da3322 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -62,6 +62,10 @@ config PAGE_OWNER
 
 	  If unsure, say N.
 
+config PAGE_OWNER_FREE_STACK
+	def_bool KASAN || DEBUG_PAGEALLOC
+	depends on PAGE_OWNER
+
 config PAGE_POISONING
 	bool "Poison pages after freeing"
 	select PAGE_POISONING_NO_SANITY if HIBERNATION
diff --git a/mm/page_owner.c b/mm/page_owner.c
index d3cf5d336ccf..f3aeec78822f 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -24,13 +24,14 @@ struct page_owner {
 	short last_migrate_reason;
 	gfp_t gfp_mask;
 	depot_stack_handle_t handle;
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
 	depot_stack_handle_t free_handle;
 #endif
 };
 
 static bool page_owner_disabled = true;
 DEFINE_STATIC_KEY_FALSE(page_owner_inited);
+static DEFINE_STATIC_KEY_FALSE(page_owner_free_stack);
 
 static depot_stack_handle_t dummy_handle;
 static depot_stack_handle_t failure_handle;
@@ -91,6 +92,8 @@ static void init_page_owner(void)
 	register_failure_stack();
 	register_early_stack();
 	static_branch_enable(&page_owner_inited);
+	if (IS_ENABLED(CONFIG_KASAN) || debug_pagealloc_enabled())
+		static_branch_enable(&page_owner_free_stack);
 	init_early_allocated_pages();
 }
 
@@ -148,11 +151,11 @@ void __reset_page_owner(struct page *page, unsigned int order)
 {
 	int i;
 	struct page_ext *page_ext;
-#ifdef CONFIG_DEBUG_PAGEALLOC
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
 	depot_stack_handle_t handle = 0;
 	struct page_owner *page_owner;
 
-	if (debug_pagealloc_enabled())
+	if (static_branch_unlikely(&page_owner_free_stack))
 		handle = save_stack(GFP_NOWAIT | __GFP_NOWARN);
 #endif
 
@@ -161,8 +164,8 @@ void __reset_page_owner(struct page *page, unsigned int order)
 		return;
 	for (i = 0; i < (1 << order); i++) {
 		__clear_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags);
-#ifdef CONFIG_DEBUG_PAGEALLOC
-		if (debug_pagealloc_enabled()) {
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
+		if (static_branch_unlikely(&page_owner_free_stack)) {
 			page_owner = get_page_owner(page_ext);
 			page_owner->free_handle = handle;
 		}
@@ -450,14 +453,16 @@ void __dump_page_owner(struct page *page)
 		stack_trace_print(entries, nr_entries, 0);
 	}
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
-	handle = READ_ONCE(page_owner->free_handle);
-	if (!handle) {
-		pr_alert("page_owner free stack trace missing\n");
-	} else {
-		nr_entries = stack_depot_fetch(handle, &entries);
-		pr_alert("page last free stack trace:\n");
-		stack_trace_print(entries, nr_entries, 0);
+#ifdef CONFIG_PAGE_OWNER_FREE_STACK
+	if (static_branch_unlikely(&page_owner_free_stack)) {
+		handle = READ_ONCE(page_owner->free_handle);
+		if (!handle) {
+			pr_alert("page_owner free stack trace missing\n");
+		} else {
+			nr_entries = stack_depot_fetch(handle, &entries);
+			pr_alert("page last free stack trace:\n");
+			stack_trace_print(entries, nr_entries, 0);
+		}
 	}
 #endif

From patchwork Wed Sep 25 14:30:52 2019
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 11160811
From: Vlastimil Babka
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, Qian Cai, "Kirill A. Shutemov",
    Matthew Wilcox, Mel Gorman, Michal Hocko, Vlastimil Babka,
    "Kirill A. Shutemov"
Subject: [PATCH 3/3] mm, page_owner: rename flag indicating that page is allocated
Date: Wed, 25 Sep 2019 16:30:52 +0200
Message-Id: <20190925143056.25853-4-vbabka@suse.cz>
In-Reply-To: <20190925143056.25853-1-vbabka@suse.cz>
References: <20190925143056.25853-1-vbabka@suse.cz>

Commit 37389167a281 ("mm, page_owner: keep owner info when freeing the
page") has introduced a flag PAGE_EXT_OWNER_ACTIVE to indicate that the
page is tracked as being allocated. Kirill suggested naming it
PAGE_EXT_OWNER_ALLOCED to make it more clear, as "active" is a somewhat
loaded term for a page.

Suggested-by: Kirill A. Shutemov
Signed-off-by: Vlastimil Babka
---
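After the rename, the two page_owner bits spell out the lifecycle more
clearly: PAGE_EXT_OWNER says the page has been tracked at some point,
while PAGE_EXT_OWNER_ALLOCED says it is tracked as currently allocated.
A minimal userspace sketch of that lifecycle (illustration only; the bit
names come from the patch, everything else is hypothetical):

#include <stdio.h>

#define PAGE_EXT_OWNER		(1UL << 0)
#define PAGE_EXT_OWNER_ALLOCED	(1UL << 1)

static void dump_state(unsigned long flags)
{
	if (!(flags & PAGE_EXT_OWNER))
		printf("page_owner info is not present\n");
	else if (flags & PAGE_EXT_OWNER_ALLOCED)
		printf("page_owner tracks the page as allocated\n");
	else
		printf("page_owner tracks the page as freed\n");
}

int main(void)
{
	unsigned long flags = 0;

	dump_state(flags);			/* never tracked */
	flags |= PAGE_EXT_OWNER | PAGE_EXT_OWNER_ALLOCED; /* allocation */
	dump_state(flags);			/* allocated */
	flags &= ~PAGE_EXT_OWNER_ALLOCED;	/* free */
	dump_state(flags);			/* freed, alloc info retained */
	return 0;
}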
 include/linux/page_ext.h |  2 +-
 mm/page_owner.c          | 12 ++++++------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index 5e856512bafb..4ca0e176433c 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -18,7 +18,7 @@ struct page_ext_operations {
 
 enum page_ext_flags {
 	PAGE_EXT_OWNER,
-	PAGE_EXT_OWNER_ACTIVE,
+	PAGE_EXT_OWNER_ALLOCED,
 #if defined(CONFIG_IDLE_PAGE_TRACKING) && !defined(CONFIG_64BIT)
 	PAGE_EXT_YOUNG,
 	PAGE_EXT_IDLE,
diff --git a/mm/page_owner.c b/mm/page_owner.c
index f3aeec78822f..f16317e98fda 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -163,7 +163,7 @@ void __reset_page_owner(struct page *page, unsigned int order)
 	if (unlikely(!page_ext))
 		return;
 	for (i = 0; i < (1 << order); i++) {
-		__clear_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags);
+		__clear_bit(PAGE_EXT_OWNER_ALLOCED, &page_ext->flags);
 #ifdef CONFIG_PAGE_OWNER_FREE_STACK
 		if (static_branch_unlikely(&page_owner_free_stack)) {
 			page_owner = get_page_owner(page_ext);
@@ -188,7 +188,7 @@ static inline void __set_page_owner_handle(struct page *page,
 		page_owner->gfp_mask = gfp_mask;
 		page_owner->last_migrate_reason = -1;
 		__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
-		__set_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags);
+		__set_bit(PAGE_EXT_OWNER_ALLOCED, &page_ext->flags);
 
 		page_ext = page_ext_next(page_ext);
 	}
@@ -262,7 +262,7 @@ void __copy_page_owner(struct page *oldpage, struct page *newpage)
 	 * the new page, which will be freed.
 	 */
 	__set_bit(PAGE_EXT_OWNER, &new_ext->flags);
-	__set_bit(PAGE_EXT_OWNER_ACTIVE, &new_ext->flags);
+	__set_bit(PAGE_EXT_OWNER_ALLOCED, &new_ext->flags);
 }
 
 void pagetypeinfo_showmixedcount_print(struct seq_file *m,
@@ -322,7 +322,7 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
 		if (unlikely(!page_ext))
 			continue;
 
-		if (!test_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags))
+		if (!test_bit(PAGE_EXT_OWNER_ALLOCED, &page_ext->flags))
 			continue;
 
 		page_owner = get_page_owner(page_ext);
@@ -437,7 +437,7 @@ void __dump_page_owner(struct page *page)
 		return;
 	}
 
-	if (test_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags))
+	if (test_bit(PAGE_EXT_OWNER_ALLOCED, &page_ext->flags))
 		pr_alert("page_owner tracks the page as allocated\n");
 	else
 		pr_alert("page_owner tracks the page as freed\n");
@@ -531,7 +531,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 	 * Although we do have the info about past allocation of free
 	 * pages, it's not relevant for current memory usage.
 	 */
-	if (!test_bit(PAGE_EXT_OWNER_ACTIVE, &page_ext->flags))
+	if (!test_bit(PAGE_EXT_OWNER_ALLOCED, &page_ext->flags))
 		continue;
 
 	page_owner = get_page_owner(page_ext);