From patchwork Wed Sep 4 16:58:01 2024
X-Patchwork-Submitter: Mark Brown
X-Patchwork-Id: 13791211
From: Mark Brown
Date: Wed, 04 Sep 2024 17:58:01 +0100
Subject: [PATCH v2 3/3] mm: Care about shadow stack guard gap when getting an unmapped area
Message-Id: <20240904-mm-generic-shadow-stack-guard-v2-3-a46b8b6dc0ed@kernel.org>
References: <20240904-mm-generic-shadow-stack-guard-v2-0-a46b8b6dc0ed@kernel.org>
In-Reply-To: <20240904-mm-generic-shadow-stack-guard-v2-0-a46b8b6dc0ed@kernel.org>
To: Richard Henderson, Ivan Kokshaysky, Matt Turner, Vineet Gupta, Russell King, Guo Ren, Huacai Chen, WANG Xuerui, Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Naveen N Rao, Alexander Gordeev, Gerald Schaefer, Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Sven Schnelle, Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz, "David S. Miller", Andreas Larsson, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Chris Zankel, Max Filippov, Andrew Morton, "Liam R. Howlett", Vlastimil Babka, Lorenzo Stoakes
Cc: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-mm@kvack.org, Rick Edgecombe, Mark Brown
X-Mailer: b4 0.15-dev-99b12

As covered in the commit log for c44357c2e76b ("x86/mm: care about shadow
stack guard gap during placement"), our current mmap() implementation does
not take care to ensure that a new mapping isn't placed with existing
mappings inside its own guard gaps. This is particularly important for
shadow stacks, since if two shadow stacks end up getting placed adjacent
to each other then they can overflow into each other, which weakens the
protection offered by the feature.

On x86 there is a custom arch_get_unmapped_area() which was updated by the
above commit to cover this case by specifying a start_gap for allocations
with VM_SHADOW_STACK. Both arm64 and RISC-V have equivalent features and
use the generic implementation of arch_get_unmapped_area(), so let's make
the equivalent change there so they also don't get shadow stack pages
placed without guard pages.

x86 uses a single page guard; this is also sufficient for arm64, where we
either do single word pops and pushes or unconstrained writes.

Architectures which do not have this feature will define VM_SHADOW_STACK
to VM_NONE and hence be unaffected.

Suggested-by: Rick Edgecombe
Acked-by: Lorenzo Stoakes
Signed-off-by: Mark Brown
---
 mm/mmap.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/mm/mmap.c b/mm/mmap.c
index b06ba847c96e..050c5ae2f80f 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1753,6 +1753,18 @@ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
 	return gap;
 }
 
+/*
+ * Determine if the allocation needs to ensure that there is no
+ * existing mapping within its guard gaps, for use as start_gap.
+ */
+static inline unsigned long stack_guard_placement(vm_flags_t vm_flags)
+{
+	if (vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
+
 /*
  * Search for an unmapped address range.
  *
@@ -1814,6 +1826,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = mmap_end;
+	info.start_gap = stack_guard_placement(vm_flags);
 
 	return vm_unmapped_area(&info);
 }
@@ -1863,6 +1876,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 	info.length = len;
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
+	info.start_gap = stack_guard_placement(vm_flags);
 
 	addr = vm_unmapped_area(&info);
 
 	/*
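
For readers unfamiliar with start_gap, here is a minimal standalone sketch
(plain userspace C, not kernel code; the hole addresses and helper name are
illustrative) of the effect the field has on a bottom-up search: the free
hole must accommodate the guard gap as well as the mapping, and the mapping
is placed above the gap, so a new shadow stack is never left flush against
the mapping below it.

#include <stdio.h>

#define PAGE_SIZE 4096UL

struct hole { unsigned long start, end; };

/*
 * Pick an address inside the hole for a mapping of 'len' bytes that
 * keeps 'start_gap' bytes free below it; returns 0 if it does not fit.
 */
static unsigned long place_bottom_up(struct hole h, unsigned long len,
				     unsigned long start_gap)
{
	if (h.end - h.start < len + start_gap)
		return 0;
	return h.start + start_gap;
}

int main(void)
{
	/* A free hole that begins right above an existing mapping. */
	struct hole h = { 0x7f0000000000UL, 0x7f0000100000UL };
	unsigned long len = 16 * PAGE_SIZE;

	printf("no gap:   %#lx\n", place_bottom_up(h, len, 0));
	printf("with gap: %#lx\n", place_bottom_up(h, len, PAGE_SIZE));
	return 0;
}

With start_gap = 0 the new mapping would land at the very bottom of the
hole; with PAGE_SIZE it sits one page higher, which is what
stack_guard_placement() arranges for VM_SHADOW_STACK in the hunks above.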