From patchwork Fri Mar 10 04:29:13 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 13168729
Date: Thu, 9 Mar 2023 20:29:13 -0800
In-Reply-To: <20230310042914.3805818-1-pcc@google.com>
Message-Id: <20230310042914.3805818-2-pcc@google.com>
Mime-Version: 1.0
References: <20230310042914.3805818-1-pcc@google.com>
X-Mailer: git-send-email 2.40.0.rc1.284.g88254d51c5-goog
Subject: [PATCH v4 1/2] Revert "kasan: drop skip_kasan_poison variable in free_pages_prepare"
From: Peter Collingbourne <pcc@google.com>
To: catalin.marinas@arm.com, andreyknvl@gmail.com
Cc: Peter Collingbourne <pcc@google.com>, linux-mm@kvack.org,
 kasan-dev@googlegroups.com, ryabinin.a.a@gmail.com,
 linux-arm-kernel@lists.infradead.org, vincenzo.frascino@arm.com,
 will@kernel.org, eugenis@google.com, stable@vger.kernel.org

This reverts commit 487a32ec24be819e747af8c2ab0d5c515508086a.

The should_skip_kasan_poison() function reads the PG_skip_kasan_poison
flag from page->flags. However, this line of code in
free_pages_prepare():

	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;

clears most of page->flags, including PG_skip_kasan_poison, before
should_skip_kasan_poison() is called, so the check could never return
true as a result of the page flag being set. Therefore, call
should_skip_kasan_poison() before clearing the flags, as we did before
the reverted patch.

This fixes a measurable performance regression introduced in the
reverted commit, where munmap() takes longer than intended if HW tags
KASAN is supported and enabled at runtime. Without this patch, we see a
single-digit percentage performance regression in a particular
mmap()-heavy benchmark when enabling HW tags KASAN, and with the patch
there is no statistically significant performance impact.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Fixes: 487a32ec24be ("kasan: drop skip_kasan_poison variable in free_pages_prepare")
Cc: stable@vger.kernel.org # 6.1
Link: https://linux-review.googlesource.com/id/Ic4f13affeebd20548758438bb9ed9ca40e312b79
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
---
 mm/page_alloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1c54790c2d17..c58ebf21ce63 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1413,6 +1413,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 			unsigned int order, fpi_t fpi_flags)
 {
 	int bad = 0;
+	bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags);
 	bool init = want_init_on_free();
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
@@ -1489,7 +1490,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
-	if (!should_skip_kasan_poison(page, fpi_flags)) {
+	if (!skip_kasan_poison) {
 		kasan_poison_pages(page, order, init);
 
 		/* Memory is already initialized if KASAN did it internally. */
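The ordering problem described in the commit message above can be reproduced outside the kernel. Below is a minimal, self-contained C model, not the kernel code: the flag bit, the mask value, and the helper names are illustrative only. It shows why reading a page flag only after page->flags has been masked can never observe the flag, and why hoisting the read above the mask restores the intended behaviour.

#include <stdbool.h>
#include <stdio.h>

#define PG_SKIP_KASAN_POISON	(1UL << 3)	/* illustrative bit, not the real layout */
#define FLAGS_CHECK_AT_PREP	(~0UL)		/* stands in for PAGE_FLAGS_CHECK_AT_PREP */

struct fake_page { unsigned long flags; };

static bool should_skip_poison(const struct fake_page *page)
{
	return page->flags & PG_SKIP_KASAN_POISON;
}

static bool free_prepare_buggy(struct fake_page *page)
{
	page->flags &= ~FLAGS_CHECK_AT_PREP;	/* flag is cleared here... */
	return should_skip_poison(page);	/* ...so this can never be true */
}

static bool free_prepare_fixed(struct fake_page *page)
{
	bool skip = should_skip_poison(page);	/* read the flag first */

	page->flags &= ~FLAGS_CHECK_AT_PREP;	/* then clear the prep flags */
	return skip;
}

int main(void)
{
	struct fake_page a = { .flags = PG_SKIP_KASAN_POISON };
	struct fake_page b = { .flags = PG_SKIP_KASAN_POISON };

	printf("buggy order skips poisoning: %d\n", free_prepare_buggy(&a));	/* prints 0 */
	printf("fixed order skips poisoning: %d\n", free_prepare_fixed(&b));	/* prints 1 */
	return 0;
}

Compiled with any C compiler, the first call prints 0 and the second prints 1, mirroring the before/after behaviour the commit message describes.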
From patchwork Fri Mar 10 04:29:14 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 13168731
Date: Thu, 9 Mar 2023 20:29:14 -0800
In-Reply-To: <20230310042914.3805818-1-pcc@google.com>
Message-Id: <20230310042914.3805818-3-pcc@google.com>
Mime-Version: 1.0
References: <20230310042914.3805818-1-pcc@google.com>
X-Mailer: git-send-email 2.40.0.rc1.284.g88254d51c5-goog
Subject: [PATCH v4 2/2] kasan: remove PG_skip_kasan_poison flag
From: Peter Collingbourne <pcc@google.com>
To: catalin.marinas@arm.com, andreyknvl@gmail.com
Cc: Peter Collingbourne <pcc@google.com>, linux-mm@kvack.org,
 kasan-dev@googlegroups.com, ryabinin.a.a@gmail.com,
 linux-arm-kernel@lists.infradead.org, vincenzo.frascino@arm.com,
 will@kernel.org, eugenis@google.com

Code inspection reveals that PG_skip_kasan_poison is redundant with
kasantag, because the former is intended to be set iff the latter is the
match-all tag. It can also be observed that it is basically pointless to
poison pages which have kasantag=0, because any pages with this tag would
have been pointed to by pointers with match-all tags, so poisoning the
pages would have little to no effect in terms of bug detection.

Therefore, change the condition in should_skip_kasan_poison() to check
kasantag instead, and remove PG_skip_kasan_poison and associated flags.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/I57f825f2eaeaf7e8389d6cf4597c8a5821359838
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
---
v4:
- rebased to linux-next

v3:
- update comments

v2:
- also remove GFP_SKIP_KASAN_POISON and FPI_SKIP_KASAN_POISON
- rename GFP_SKIP_KASAN_UNPOISON to GFP_SKIP_KASAN
- update comments
- simplify control flow by removing reset_tags

 include/linux/gfp_types.h      | 30 ++++++-------
 include/linux/page-flags.h     |  9 ----
 include/trace/events/mmflags.h | 13 +-----
 mm/kasan/hw_tags.c             |  2 +-
 mm/page_alloc.c                | 81 +++++++++++++---------------------
 mm/vmalloc.c                   |  2 +-
 6 files changed, 47 insertions(+), 90 deletions(-)

diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
index 5088637fe5c2..6583a58670c5 100644
--- a/include/linux/gfp_types.h
+++ b/include/linux/gfp_types.h
@@ -47,16 +47,14 @@ typedef unsigned int __bitwise gfp_t;
 #define ___GFP_ACCOUNT		0x400000u
 #define ___GFP_ZEROTAGS		0x800000u
 #ifdef CONFIG_KASAN_HW_TAGS
-#define ___GFP_SKIP_ZERO		0x1000000u
-#define ___GFP_SKIP_KASAN_UNPOISON	0x2000000u
-#define ___GFP_SKIP_KASAN_POISON	0x4000000u
+#define ___GFP_SKIP_ZERO	0x1000000u
+#define ___GFP_SKIP_KASAN	0x2000000u
 #else
-#define ___GFP_SKIP_ZERO		0
-#define ___GFP_SKIP_KASAN_UNPOISON	0
-#define ___GFP_SKIP_KASAN_POISON	0
+#define ___GFP_SKIP_ZERO	0
+#define ___GFP_SKIP_KASAN	0
 #endif
 #ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP	0x8000000u
+#define ___GFP_NOLOCKDEP	0x4000000u
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
@@ -234,25 +232,24 @@ typedef unsigned int __bitwise gfp_t;
 * memory tags at the same time as zeroing memory has minimal additional
 * performace impact.
 *
- * %__GFP_SKIP_KASAN_UNPOISON makes KASAN skip unpoisoning on page allocation.
- * Only effective in HW_TAGS mode.
- *
- * %__GFP_SKIP_KASAN_POISON makes KASAN skip poisoning on page deallocation.
- * Typically, used for userspace pages. Only effective in HW_TAGS mode.
+ * %__GFP_SKIP_KASAN makes KASAN skip unpoisoning on page allocation.
+ * Used for userspace and vmalloc pages; the latter are unpoisoned by
+ * kasan_unpoison_vmalloc instead. For userspace pages, results in
+ * poisoning being skipped as well, see should_skip_kasan_poison for
+ * details. Only effective in HW_TAGS mode.
 */
 #define __GFP_NOWARN	((__force gfp_t)___GFP_NOWARN)
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
 #define __GFP_ZEROTAGS	((__force gfp_t)___GFP_ZEROTAGS)
 #define __GFP_SKIP_ZERO ((__force gfp_t)___GFP_SKIP_ZERO)
-#define __GFP_SKIP_KASAN_UNPOISON ((__force gfp_t)___GFP_SKIP_KASAN_UNPOISON)
-#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)
+#define __GFP_SKIP_KASAN ((__force gfp_t)___GFP_SKIP_KASAN)
 
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (27 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (26 + IS_ENABLED(CONFIG_LOCKDEP))
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
@@ -335,8 +332,7 @@ typedef unsigned int __bitwise gfp_t;
 #define GFP_DMA		__GFP_DMA
 #define GFP_DMA32	__GFP_DMA32
 #define GFP_HIGHUSER	(GFP_USER | __GFP_HIGHMEM)
-#define GFP_HIGHUSER_MOVABLE	(GFP_HIGHUSER | __GFP_MOVABLE | \
-			 __GFP_SKIP_KASAN_POISON | __GFP_SKIP_KASAN_UNPOISON)
+#define GFP_HIGHUSER_MOVABLE	(GFP_HIGHUSER | __GFP_MOVABLE | __GFP_SKIP_KASAN)
 #define GFP_TRANSHUGE_LIGHT	((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
 			 __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM)
 #define GFP_TRANSHUGE	(GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 57287102c5bd..dcda20c47b8f 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -135,9 +135,6 @@ enum pageflags {
 #ifdef CONFIG_ARCH_USES_PG_ARCH_X
 	PG_arch_2,
 	PG_arch_3,
-#endif
-#ifdef CONFIG_KASAN_HW_TAGS
-	PG_skip_kasan_poison,
 #endif
 	__NR_PAGEFLAGS,
 
@@ -594,12 +591,6 @@ TESTCLEARFLAG(Young, young, PF_ANY)
 PAGEFLAG(Idle, idle, PF_ANY)
 #endif
 
-#ifdef CONFIG_KASAN_HW_TAGS
-PAGEFLAG(SkipKASanPoison, skip_kasan_poison, PF_HEAD)
-#else
-PAGEFLAG_FALSE(SkipKASanPoison, skip_kasan_poison)
-#endif
-
 /*
  * PageReported() is used to track reported free pages within the Buddy
 * allocator. We can use the non-atomic version of the test and set
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index b28218b7998e..b63e7c0fbbe5 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -55,8 +55,7 @@
 #ifdef CONFIG_KASAN_HW_TAGS
 #define __def_gfpflag_names_kasan ,			\
 	gfpflag_string(__GFP_SKIP_ZERO),		\
-	gfpflag_string(__GFP_SKIP_KASAN_POISON),	\
-	gfpflag_string(__GFP_SKIP_KASAN_UNPOISON)
+	gfpflag_string(__GFP_SKIP_KASAN)
 #else
 #define __def_gfpflag_names_kasan
 #endif
@@ -96,13 +95,6 @@
 #define IF_HAVE_PG_ARCH_X(_name)
 #endif
 
-#ifdef CONFIG_KASAN_HW_TAGS
-#define IF_HAVE_PG_SKIP_KASAN_POISON(_name)	\
-	,{1UL << PG_##_name, __stringify(_name)}
-#else
-#define IF_HAVE_PG_SKIP_KASAN_POISON(_name)
-#endif
-
 #define DEF_PAGEFLAG_NAME(_name) { 1UL << PG_##_name, __stringify(_name) }
 
 #define __def_pageflag_names \
@@ -133,8 +125,7 @@ IF_HAVE_PG_HWPOISON(hwpoison)		\
IF_HAVE_PG_IDLE(idle)			\
IF_HAVE_PG_IDLE(young)			\
IF_HAVE_PG_ARCH_X(arch_2)		\
-IF_HAVE_PG_ARCH_X(arch_3)		\
-IF_HAVE_PG_SKIP_KASAN_POISON(skip_kasan_poison)
+IF_HAVE_PG_ARCH_X(arch_3)
 
 #define show_page_flags(flags) \
 	(flags) ? __print_flags(flags, "|",				\
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index d1bcb0205327..bb4f56e5bdec 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -318,7 +318,7 @@ void *__kasan_unpoison_vmalloc(const void *start, unsigned long size,
 	 * Thus, for VM_ALLOC mappings, hardware tag-based KASAN only tags
 	 * the first virtual mapping, which is created by vmalloc().
 	 * Tagging the page_alloc memory backing that vmalloc() allocation is
-	 * skipped, see ___GFP_SKIP_KASAN_UNPOISON.
+	 * skipped, see ___GFP_SKIP_KASAN.
 	 *
 	 * For non-VM_ALLOC allocations, page_alloc memory is tagged as usual.
 	 */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c58ebf21ce63..680a4d76460e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -112,17 +112,6 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_TO_TAIL		((__force fpi_t)BIT(1))
 
-/*
- * Don't poison memory with KASAN (only for the tag-based modes).
- * During boot, all non-reserved memblock memory is exposed to page_alloc.
- * Poisoning all that memory lengthens boot time, especially on systems with
- * large amount of RAM. This flag is used to skip that poisoning.
- * This is only done for the tag-based KASAN modes, as those are able to
- * detect memory corruptions with the memory tags assigned by default.
- * All memory allocated normally after boot gets poisoned as usual.
- */
-#define FPI_SKIP_KASAN_POISON	((__force fpi_t)BIT(2))
-
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8)
@@ -1370,13 +1359,19 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
 /*
  * Skip KASAN memory poisoning when either:
  *
- * 1. Deferred memory initialization has not yet completed,
- *    see the explanation below.
- * 2. Skipping poisoning is requested via FPI_SKIP_KASAN_POISON,
- *    see the comment next to it.
- * 3. Skipping poisoning is requested via __GFP_SKIP_KASAN_POISON,
- *    see the comment next to it.
- * 4. The allocation is excluded from being checked due to sampling,
+ * 1. For generic KASAN: deferred memory initialization has not yet completed.
+ *    Tag-based KASAN modes skip pages freed via deferred memory initialization
+ *    using page tags instead (see below).
+ * 2. For tag-based KASAN modes: the page has a match-all KASAN tag, indicating
+ *    that error detection is disabled for accesses via the page address.
+ *
+ * Pages will have match-all tags in the following circumstances:
+ *
+ * 1. Pages are being initialized for the first time, including during deferred
+ *    memory init; see the call to page_kasan_tag_reset in __init_single_page.
+ * 2. The allocation was not unpoisoned due to __GFP_SKIP_KASAN, with the
+ *    exception of pages unpoisoned by kasan_unpoison_vmalloc.
+ * 3. The allocation was excluded from being checked due to sampling,
  *    see the call to kasan_unpoison_pages.
  *
  * Poisoning pages during deferred memory init will greatly lengthen the
@@ -1392,10 +1387,10 @@ static int free_tail_pages_check(struct page *head_page, struct page *page)
  */
 static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
 {
-	return deferred_pages_enabled() ||
-	       (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-		(fpi_flags & FPI_SKIP_KASAN_POISON)) ||
-	       PageSkipKASanPoison(page);
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+		return deferred_pages_enabled();
+
+	return page_kasan_tag(page) == 0xff;
 }
 
 static void kernel_init_pages(struct page *page, int numpages)
@@ -1730,7 +1725,7 @@ void __free_pages_core(struct page *page, unsigned int order)
 	 * Bypass PCP and place fresh pages right to the tail, primarily
 	 * relevant for memory onlining.
 	 */
-	__free_pages_ok(page, order, FPI_TO_TAIL | FPI_SKIP_KASAN_POISON);
+	__free_pages_ok(page, order, FPI_TO_TAIL);
 }
 
 #ifdef CONFIG_NUMA
@@ -2396,9 +2391,9 @@ static inline bool should_skip_kasan_unpoison(gfp_t flags)
 
 	/*
 	 * With hardware tag-based KASAN enabled, skip if this has been
-	 * requested via __GFP_SKIP_KASAN_UNPOISON.
+	 * requested via __GFP_SKIP_KASAN.
 	 */
-	return flags & __GFP_SKIP_KASAN_UNPOISON;
+	return flags & __GFP_SKIP_KASAN;
 }
 
 static inline bool should_skip_init(gfp_t flags)
@@ -2417,7 +2412,6 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags) &&
 			!should_skip_init(gfp_flags);
 	bool zero_tags = init && (gfp_flags & __GFP_ZEROTAGS);
-	bool reset_tags = true;
 	int i;
 
 	set_page_private(page, 0);
@@ -2451,37 +2445,22 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 		/* Take note that memory was initialized by the loop above. */
 		init = false;
 	}
-	if (!should_skip_kasan_unpoison(gfp_flags)) {
-		/* Try unpoisoning (or setting tags) and initializing memory. */
-		if (kasan_unpoison_pages(page, order, init)) {
-			/* Take note that memory was initialized by KASAN. */
-			if (kasan_has_integrated_init())
-				init = false;
-			/* Take note that memory tags were set by KASAN. */
-			reset_tags = false;
-		} else {
-			/*
-			 * KASAN decided to exclude this allocation from being
-			 * (un)poisoned due to sampling. Make KASAN skip
-			 * poisoning when the allocation is freed.
-			 */
-			SetPageSkipKASanPoison(page);
-		}
-	}
-	/*
-	 * If memory tags have not been set by KASAN, reset the page tags to
-	 * ensure page_address() dereferencing does not fault.
-	 */
-	if (reset_tags) {
+	if (!should_skip_kasan_unpoison(gfp_flags) &&
+	    kasan_unpoison_pages(page, order, init)) {
+		/* Take note that memory was initialized by KASAN. */
+		if (kasan_has_integrated_init())
+			init = false;
+	} else {
+		/*
+		 * If memory tags have not been set by KASAN, reset the page
+		 * tags to ensure page_address() dereferencing does not fault.
+		 */
 		for (i = 0; i != 1 << order; ++i)
 			page_kasan_tag_reset(page + i);
 	}
 	/* If memory is still not initialized, initialize it now. */
 	if (init)
 		kernel_init_pages(page, 1 << order);
-	/* Propagate __GFP_SKIP_KASAN_POISON to page flags. */
-	if (kasan_hw_tags_enabled() && (gfp_flags & __GFP_SKIP_KASAN_POISON))
-		SetPageSkipKASanPoison(page);
 
 	set_page_owner(page, order, gfp_flags);
 	page_table_check_alloc(page, order);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index bef6cf2b4d46..5e60e9792cbf 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3188,7 +3188,7 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 		 * pages backing VM_ALLOC mapping. Memory is instead
 		 * poisoned and zeroed by kasan_unpoison_vmalloc().
 		 */
-		gfp_mask |= __GFP_SKIP_KASAN_UNPOISON | __GFP_SKIP_ZERO;
+		gfp_mask |= __GFP_SKIP_KASAN | __GFP_SKIP_ZERO;
 	}
 
 	/* Take note that the mapping is PAGE_KERNEL. */
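To make the reasoning in patch 2 concrete, here is a small, self-contained C model. It is not kernel code: the value 0xff matches the kernel's match-all tag, but the tag-check helper below is only a simplification of hardware (MTE-style) tag checking, and the helper names are illustrative. The point it demonstrates is the one the commit message makes: pages left with the match-all tag gain nothing from poisoning, because the pointers handed out for such pages also carry the match-all tag and therefore never produce a tag mismatch.

#include <stdbool.h>
#include <stdio.h>

#define KASAN_TAG_KERNEL 0xff	/* match-all tag value, as in the kernel */

/*
 * Simplified tag-check rule: an access is reported only when the pointer
 * tag is not the match-all tag and differs from the memory tag.
 */
static bool access_reported(unsigned char ptr_tag, unsigned char mem_tag)
{
	return ptr_tag != KASAN_TAG_KERNEL && ptr_tag != mem_tag;
}

/* Mirrors the shape of the new should_skip_kasan_poison() condition. */
static bool should_skip_poison(unsigned char page_tag)
{
	return page_tag == KASAN_TAG_KERNEL;
}

int main(void)
{
	/*
	 * A page whose tag is match-all hands out match-all pointers, so no
	 * memory tag we could poison it with would ever trigger a report.
	 */
	printf("skip poisoning of a match-all page: %d\n", should_skip_poison(0xff));
	printf("mismatched tagged access reported:  %d\n", access_reported(0x3, 0x7));
	printf("match-all pointer never reported:   %d\n", access_reported(0xff, 0x7));
	return 0;
}

This is the same observation that lets should_skip_kasan_poison() reduce to a page_kasan_tag(page) == 0xff check once PG_skip_kasan_poison is removed.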