From patchwork Tue Nov 30 21:39:08 2021
From: andrey.konovalov@linux.dev
To: Marco Elver, Alexander Potapenko, Vincenzo Frascino, Catalin Marinas,
	Peter Collingbourne
Cc: Andrey Konovalov, Dmitry Vyukov, Andrey Ryabinin,
	kasan-dev@googlegroups.com, Andrew Morton, linux-mm@kvack.org,
	Will Deacon, linux-arm-kernel@lists.infradead.org, Evgenii Stepanov,
	linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH 02/31] kasan, page_alloc: move tag_clear_highpage out of
	kernel_init_free_pages
Date: Tue, 30 Nov 2021 22:39:08 +0100

From: Andrey Konovalov

Currently, kernel_init_free_pages() serves two purposes: it either only
zeroes memory or zeroes both memory and memory tags via a different code
path. As this function has only two callers, each using only one of the
two code paths, this behaviour is confusing.

This patch pulls the code that zeroes both memory and tags out of
kernel_init_free_pages().

As a result of this change, the code in free_pages_prepare() starts to
look complicated, but this is improved in the following patches. Those
improvements are not integrated into this patch to make the diffs easier
to read.

This patch introduces no functional changes.
Signed-off-by: Andrey Konovalov
Reviewed-by: Alexander Potapenko
---
 mm/page_alloc.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c99566a3b67e..3589333b5b77 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1269,16 +1269,10 @@ static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
 	       PageSkipKASanPoison(page);
 }
 
-static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
+static void kernel_init_free_pages(struct page *page, int numpages)
 {
 	int i;
 
-	if (zero_tags) {
-		for (i = 0; i < numpages; i++)
-			tag_clear_highpage(page + i);
-		return;
-	}
-
 	/* s390's use of memset() could override KASAN redzones. */
 	kasan_disable_current();
 	for (i = 0; i < numpages; i++) {
@@ -1372,7 +1366,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		bool init = want_init_on_free();
 
 		if (init)
-			kernel_init_free_pages(page, 1 << order, false);
+			kernel_init_free_pages(page, 1 << order);
 		if (!skip_kasan_poison)
 			kasan_poison_pages(page, order, init);
 	}
@@ -2415,9 +2409,17 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 		bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
 
 		kasan_unpoison_pages(page, order, init);
-		if (init)
-			kernel_init_free_pages(page, 1 << order,
-					       gfp_flags & __GFP_ZEROTAGS);
+
+		if (init) {
+			if (gfp_flags & __GFP_ZEROTAGS) {
+				int i;
+
+				for (i = 0; i < 1 << order; i++)
+					tag_clear_highpage(page + i);
+			} else {
+				kernel_init_free_pages(page, 1 << order);
+			}
+		}
 	}
 
 	set_page_owner(page, order, gfp_flags);
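
For readers who want to see the resulting split outside the diff context,
below is a minimal userspace sketch of the control flow after this patch:
kernel_init_free_pages() is left with the single job of zeroing memory,
and the __GFP_ZEROTAGS case is open-coded at the allocation-hook call
site. The struct, the helpers, and the flag value are simplified
stand-ins rather than the real mm/page_alloc.c definitions, and the
zeroing loop omits the per-page KASAN tag handling the kernel performs
around clear_highpage().

/*
 * Standalone sketch of the post-patch control flow; the helpers below
 * are stubs standing in for the real kernel functions.
 */
#include <stdbool.h>
#include <stdio.h>

#define __GFP_ZEROTAGS 0x1u	/* stand-in value, not the real gfp bit */

struct page { int idx; };	/* stand-in for the kernel's struct page */

static void clear_highpage(struct page *page)
{
	printf("zero page %d\n", page->idx);
}

static void tag_clear_highpage(struct page *page)
{
	printf("zero page %d and its memory tags\n", page->idx);
}

static void kasan_disable_current(void) { }
static void kasan_enable_current(void) { }

/* After the patch: only zeroes memory, never touches memory tags. */
static void kernel_init_free_pages(struct page *page, int numpages)
{
	int i;

	kasan_disable_current();
	for (i = 0; i < numpages; i++)
		clear_highpage(page + i);
	kasan_enable_current();
}

/*
 * The __GFP_ZEROTAGS case is now open-coded at the allocation hook;
 * this mirrors the init branch of post_alloc_hook() in the diff above.
 */
static void post_alloc_hook_init(struct page *page, unsigned int order,
				 unsigned int gfp_flags, bool init)
{
	if (!init)
		return;

	if (gfp_flags & __GFP_ZEROTAGS) {
		int i;

		for (i = 0; i < (1 << order); i++)
			tag_clear_highpage(page + i);
	} else {
		kernel_init_free_pages(page, 1 << order);
	}
}

int main(void)
{
	struct page pages[4] = { {0}, {1}, {2}, {3} };

	/* Free path: plain zeroing only. */
	kernel_init_free_pages(pages, 2);

	/* Alloc path with __GFP_ZEROTAGS: zero memory and tags together. */
	post_alloc_hook_init(pages, 2, __GFP_ZEROTAGS, true);
	return 0;
}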