From patchwork Fri Jan 10 18:40:41 2025
X-Patchwork-Submitter: Brendan Jackman
X-Patchwork-Id: 13935244
Date: Fri, 10 Jan 2025 18:40:41 +0000
In-Reply-To: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
Mime-Version: 1.0
References: <20250110-asi-rfc-v2-v2-0-8419288bc805@google.com>
X-Mailer: b4 0.15-dev
Message-ID: <20250110-asi-rfc-v2-v2-15-8419288bc805@google.com>
Subject: [PATCH TEMP WORKAROUND RFC v2 15/29] mm: asi: Workaround missing partial-unmap support
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra, Richard Henderson,
 Matt Turner, Vineet Gupta, Russell King, Catalin Marinas, Will Deacon,
 Guo Ren, Brian Cain, Huacai Chen, WANG Xuerui, Geert Uytterhoeven,
 Michal Simek, Thomas Bogendoerfer, Dinh Nguyen, Jonas Bonn,
 Stefan Kristiansson, Stafford Horne, "James E.J. Bottomley", Helge Deller,
 Michael Ellerman, Nicholas Piggin, Christophe Leroy, Naveen N Rao,
 Madhavan Srinivasan, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger,
 Sven Schnelle, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
 "David S. Miller", Andreas Larsson, Richard Weinberger, Anton Ivanov,
 Johannes Berg, Chris Zankel, Max Filippov, Arnd Bergmann, Andrew Morton,
 Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
 Mel Gorman, Valentin Schneider, Uladzislau Rezki, Christoph Hellwig,
 Masami Hiramatsu, Mathieu Desnoyers, Mike Rapoport,
 Arnaldo Carvalho de Melo, Namhyung Kim, Mark Rutland, Alexander Shishkin,
 Jiri Olsa, Ian Rogers, Adrian Hunter, Dennis Zhou, Tejun Heo,
 Christoph Lameter, Sean Christopherson, Paolo Bonzini, Ard Biesheuvel,
 Josh Poimboeuf, Pawan Gupta
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-alpha@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
 linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
 loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
 linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org, linux-arch@vger.kernel.org,
 linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org,
 linux-perf-users@vger.kernel.org, kvm@vger.kernel.org,
 linux-efi@vger.kernel.org, Brendan Jackman

This is a hack, no need to review it carefully.
asi_unmap() doesn't currently work unless it corresponds exactly to an
asi_map() of the exact same region. This is mostly harmless (it's only a
functional problem if you want to touch those pages from the ASI critical
section) but it's messy. For now, work around the only practical case that
arises by moving the asi_map() call up the call stack in the page allocator,
to the point where we know the actual size the mapping is supposed to end up
at. This just removes the main case where partial unmaps happen. Later, a
proper solution for partial unmaps will be needed.

Signed-off-by: Brendan Jackman
---
 mm/page_alloc.c | 40 ++++++++++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3e98fdfbadddb1f7d71e9e050b63255b2008d167..f96e95032450be90b6567f67915b0b941fc431d8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4604,22 +4604,20 @@ void __init page_alloc_init_asi(void)
 	}
 }
 
-static int asi_map_alloced_pages(struct page *page, uint order, gfp_t gfp_mask)
+static int asi_map_alloced_pages(struct page *page, size_t size, gfp_t gfp_mask)
 {
 	if (!static_asi_enabled())
 		return 0;
 
 	if (!(gfp_mask & __GFP_SENSITIVE)) {
-		int err = asi_map_gfp(
-			ASI_GLOBAL_NONSENSITIVE, page_to_virt(page),
-			PAGE_SIZE * (1 << order), gfp_mask);
+		int err = asi_map_gfp(ASI_GLOBAL_NONSENSITIVE, page_to_virt(page), size, gfp_mask);
 		uint i;
 
 		if (err)
 			return err;
 
-		for (i = 0; i < (1 << order); i++)
+		for (i = 0; i < (size >> PAGE_SHIFT); i++)
 			__SetPageGlobalNonSensitive(page + i);
 	}
 
@@ -4629,7 +4627,7 @@ static int asi_map_alloced_pages(struct page *page, uint order, gfp_t gfp_mask)
 #else /* CONFIG_MITIGATION_ADDRESS_SPACE_ISOLATION */
 
 static inline
-int asi_map_alloced_pages(struct page *pages, uint order, gfp_t gfp_mask)
+int asi_map_alloced_pages(struct page *pages, size_t size, gfp_t gfp_mask)
 {
 	return 0;
 }
@@ -4896,7 +4894,7 @@ struct page *__alloc_pages_noprof(gfp_t gfp, unsigned int order,
 	trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
 	kmsan_alloc_page(page, order, alloc_gfp);
 
-	if (page && unlikely(asi_map_alloced_pages(page, order, gfp))) {
+	if (page && unlikely(asi_map_alloced_pages(page, PAGE_SIZE << order, gfp))) {
 		__free_pages(page, order);
 		page = NULL;
 	}
@@ -5118,12 +5116,13 @@ void page_frag_free(void *addr)
 }
 EXPORT_SYMBOL(page_frag_free);
 
-static void *make_alloc_exact(unsigned long addr, unsigned int order,
-		size_t size)
+static void *finish_exact_alloc(unsigned long addr, unsigned int order,
+		size_t size, gfp_t gfp_mask)
 {
 	if (addr) {
 		unsigned long nr = DIV_ROUND_UP(size, PAGE_SIZE);
 		struct page *page = virt_to_page((void *)addr);
+		struct page *first = page;
 		struct page *last = page + nr;
 
 		split_page_owner(page, order, 0);
@@ -5132,9 +5131,22 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 		while (page < --last)
 			set_page_refcounted(last);
 
-		last = page + (1UL << order);
+		last = page + (1 << order);
 		for (page += nr; page < last; page++)
 			__free_pages_ok(page, 0, FPI_TO_TAIL);
+
+		/*
+		 * ASI doesn't support partially undoing calls to asi_map, so
+		 * we can only safely free sub-allocations if they were made
+		 * with __GFP_SENSITIVE in the first place. Users of this need
+		 * to map with forced __GFP_SENSITIVE and then here we'll make a
+		 * second asi_map_alloced_pages() call to do any mapping that's
+		 * necessary, but with the exact size.
+		 */
+		if (unlikely(asi_map_alloced_pages(first, nr << PAGE_SHIFT, gfp_mask))) {
+			free_pages_exact(first, size);
+			return NULL;
+		}
 	}
 	return (void *)addr;
 }
@@ -5162,8 +5174,8 @@ void *alloc_pages_exact_noprof(size_t size, gfp_t gfp_mask)
 	if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM)))
 		gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM);
 
-	addr = get_free_pages_noprof(gfp_mask, order);
-	return make_alloc_exact(addr, order, size);
+	addr = get_free_pages_noprof(gfp_mask | __GFP_SENSITIVE, order);
+	return finish_exact_alloc(addr, order, size, gfp_mask);
 }
 EXPORT_SYMBOL(alloc_pages_exact_noprof);
 
@@ -5187,10 +5199,10 @@ void * __meminit alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_ma
 	if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM)))
 		gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM);
 
-	p = alloc_pages_node_noprof(nid, gfp_mask, order);
+	p = alloc_pages_node_noprof(nid, gfp_mask | __GFP_SENSITIVE, order);
 	if (!p)
 		return NULL;
-	return make_alloc_exact((unsigned long)page_address(p), order, size);
+	return finish_exact_alloc((unsigned long)page_address(p), order, size, gfp_mask);
 }
 
 /**