From patchwork Tue Mar 26 02:16:43 2024
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 13603255
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
 broonie@kernel.org, christophe.leroy@csgroup.eu,
 dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com,
 keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org,
 mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org
Cc: rick.p.edgecombe@intel.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v4 01/14] proc: Refactor pde_get_unmapped_area as prep
Date: Mon, 25 Mar 2024 19:16:43 -0700
Message-Id: <20240326021656.202649-2-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
References: <20240326021656.202649-1-rick.p.edgecombe@intel.com>

Future changes will perform a treewide change to remove the indirect
branch that is involved in calling mm->get_unmapped_area(). After that
change, the function will no longer be able to be handled as a function
pointer.

To make the treewide change diff cleaner and easier to review, refactor
pde_get_unmapped_area() such that mm->get_unmapped_area() is called
without being stored in a local function pointer. With this refactoring
in place, follow-on changes will be able to simply replace the call site
with a future function that calls it directly.
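For reference, a sketch of how the helper reads once this refactor is
applied (reconstructed from the diff below; the storage class and line
breaks are inferred from the hunk context, not quoted from the file):

static unsigned long
pde_get_unmapped_area(struct proc_dir_entry *pde, struct file *file,
		      unsigned long orig_addr, unsigned long len,
		      unsigned long pgoff, unsigned long flags)
{
	/* The proc_ops hook is now called directly, not via a local pointer. */
	if (pde->proc_ops->proc_get_unmapped_area)
		return pde->proc_ops->proc_get_unmapped_area(file, orig_addr, len, pgoff, flags);

#ifdef CONFIG_MMU
	/* This call site is what a later patch replaces with a direct call. */
	return current->mm->get_unmapped_area(file, orig_addr, len, pgoff, flags);
#endif
	return orig_addr;
}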
Signed-off-by: Rick Edgecombe
---
v4:
 - New patch split from "mm: Switch mm->get_unmapped_area() to a flag"
   (Christophe Leroy)
---
 fs/proc/inode.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/fs/proc/inode.c b/fs/proc/inode.c
index dcd513dccf55..75396a24fd8c 100644
--- a/fs/proc/inode.c
+++ b/fs/proc/inode.c
@@ -451,15 +451,12 @@ pde_get_unmapped_area(struct proc_dir_entry *pde, struct file *file, unsigned long orig_addr,
 			   unsigned long len, unsigned long pgoff,
 			   unsigned long flags)
 {
-	typeof_member(struct proc_ops, proc_get_unmapped_area) get_area;
+	if (pde->proc_ops->proc_get_unmapped_area)
+		return pde->proc_ops->proc_get_unmapped_area(file, orig_addr, len, pgoff, flags);
 
-	get_area = pde->proc_ops->proc_get_unmapped_area;
 #ifdef CONFIG_MMU
-	if (!get_area)
-		get_area = current->mm->get_unmapped_area;
+	return current->mm->get_unmapped_area(file, orig_addr, len, pgoff, flags);
 #endif
-	if (get_area)
-		return get_area(file, orig_addr, len, pgoff, flags);
 	return orig_addr;
 }

From patchwork Tue Mar 26 02:16:44 2024
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 13603257
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
 broonie@kernel.org, christophe.leroy@csgroup.eu,
 dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com,
 keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org,
 mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org
Cc: rick.p.edgecombe@intel.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-sgx@vger.kernel.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, io-uring@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH v4 02/14] mm: Switch mm->get_unmapped_area() to a flag
Date: Mon, 25 Mar 2024 19:16:44 -0700
Message-Id: <20240326021656.202649-3-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
References: <20240326021656.202649-1-rick.p.edgecombe@intel.com>

The mm_struct contains a function pointer *get_unmapped_area(), which is
set to either arch_get_unmapped_area() or arch_get_unmapped_area_topdown()
during the initialization of the mm.

Since the function pointer only ever points to two functions that are
named the same across all architectures, a function pointer is not really
required. In addition, future changes will want to add versions of the
functions that take additional arguments. So to save a pointer's worth of
bytes in mm_struct, and to prevent adding additional function pointers to
mm_struct in future changes, remove it and keep the information about
which get_unmapped_area() to use in a flag.

Add the new flag to MMF_INIT_MASK so it doesn't get clobbered on fork by
mmf_init_flags(). Most MM flags get clobbered on fork. In the pre-existing
behavior, mm->get_unmapped_area() would get copied to the new mm in
dup_mm(), so not clobbering the flag preserves the existing behavior
around inheriting the topdown-ness.

Introduce a helper, mm_get_unmapped_area(), to easily convert code that
refers to the old function pointer to instead select and call either
arch_get_unmapped_area() or arch_get_unmapped_area_topdown() based on the
flag. Then drop the mm->get_unmapped_area() function pointer. Leave the
get_unmapped_area() pointer in struct file_operations alone. The main
purpose of this change is to reorganize in preparation for future changes,
but it also converts the calls of mm->get_unmapped_area() from indirect
branches into direct ones.

The stress-ng bigheap benchmark calls realloc a lot, which calls through
get_unmapped_area() in the kernel. On x86, the change yielded a ~1%
improvement there on a retpoline config.

In testing a few x86 configs, removing the pointer unfortunately didn't
result in any actual size reductions in the compiled layout of mm_struct.
But depending on compiler or arch alignment requirements, the change could
shrink the size of mm_struct.
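To illustrate the fork-inheritance point above: mmf_init_flags() keeps
only the bits in MMF_INIT_MASK, so a bit left out of the mask would be
cleared in the child, and the child would silently fall back to the
bottom-up search. A simplified sketch (the real mmf_init_flags() also has
extra MDWE handling):

	/* Sketch only: bits outside MMF_INIT_MASK do not survive fork. */
	static inline unsigned long mmf_init_flags(unsigned long flags)
	{
		return flags & MMF_INIT_MASK;	/* MMF_TOPDOWN_MASK must be in the mask */
	}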
Signed-off-by: Rick Edgecombe
Acked-by: Dave Hansen
Acked-by: Liam R. Howlett
Reviewed-by: Kirill A. Shutemov
Cc: linux-s390@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: linux-sgx@vger.kernel.org
Cc: nvdimm@lists.linux.dev
Cc: linux-cxl@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org
Cc: io-uring@vger.kernel.org
Cc: bpf@vger.kernel.org
Acked-by: Alexei Starovoitov
---
v4:
 - Split out pde_get_unmapped_area() refactor into separate patch
   (Christophe Leroy)
v3:
 - Fix comment that still referred to mm->get_unmapped_area()
 - Resolve trivial rebase conflicts with "mm: thp_get_unmapped_area must
   honour topdown preference"
 - Spelling fix in log
v2:
 - Fix comment on MMF_TOPDOWN (Kirill, rppt)
 - Move MMF_TOPDOWN to actually unused bit
 - Add MMF_TOPDOWN to MMF_INIT_MASK so it doesn't get clobbered on fork,
   and result in the children using the search up path.
 - New lower performance results after above bug fix
 - Add Reviews and Acks
---
 arch/s390/mm/hugetlbpage.c       |  2 +-
 arch/s390/mm/mmap.c              |  4 ++--
 arch/sparc/kernel/sys_sparc_64.c | 15 ++++++---------
 arch/sparc/mm/hugetlbpage.c      |  2 +-
 arch/x86/kernel/cpu/sgx/driver.c |  2 +-
 arch/x86/mm/hugetlbpage.c        |  2 +-
 arch/x86/mm/mmap.c               |  4 ++--
 drivers/char/mem.c               |  2 +-
 drivers/dax/device.c             |  6 +++---
 fs/hugetlbfs/inode.c             |  4 ++--
 fs/proc/inode.c                  |  3 ++-
 fs/ramfs/file-mmu.c              |  2 +-
 include/linux/mm_types.h         |  6 +-----
 include/linux/sched/coredump.h   |  5 ++++-
 include/linux/sched/mm.h         |  5 +++++
 io_uring/io_uring.c              |  2 +-
 kernel/bpf/arena.c               |  2 +-
 kernel/bpf/syscall.c             |  2 +-
 mm/debug.c                       |  6 ------
 mm/huge_memory.c                 |  9 ++++-----
 mm/mmap.c                        | 21 ++++++++++++++++++---
 mm/shmem.c                       | 11 +++++------
 mm/util.c                        |  6 +++---
 23 files changed, 66 insertions(+), 57 deletions(-)

diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index c2e8242bd15d..219d906fe830 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -328,7 +328,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 		goto check_asce_limit;
 	}
 
-	if (mm->get_unmapped_area == arch_get_unmapped_area)
+	if (!test_bit(MMF_TOPDOWN, &mm->flags))
 		addr = hugetlb_get_unmapped_area_bottomup(file, addr, len,
 				pgoff, flags);
 	else
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index b14fc0887654..6b2e4436ad4a 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -185,10 +185,10 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 	 */
 	if (mmap_is_legacy(rlim_stack)) {
 		mm->mmap_base = mmap_base_legacy(random_factor);
-		mm->get_unmapped_area = arch_get_unmapped_area;
+		clear_bit(MMF_TOPDOWN, &mm->flags);
 	} else {
 		mm->mmap_base = mmap_base(random_factor, rlim_stack);
-		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+		set_bit(MMF_TOPDOWN, &mm->flags);
 	}
 }
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 1e9a9e016237..1dbf7211666e 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -218,14 +218,10 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 unsigned long get_fb_unmapped_area(struct file *filp, unsigned long orig_addr, unsigned long len, unsigned long pgoff, unsigned long flags)
 {
 	unsigned long align_goal, addr = -ENOMEM;
-	unsigned long (*get_area)(struct file *, unsigned long,
-			unsigned long, unsigned long, unsigned long);
-
-	get_area = current->mm->get_unmapped_area;
 
 	if (flags & MAP_FIXED) {
 		/* Ok, don't mess with it. */
-		return get_area(NULL, orig_addr, len, pgoff, flags);
+		return mm_get_unmapped_area(current->mm, NULL, orig_addr, len, pgoff, flags);
 	}
 	flags &= ~MAP_SHARED;
@@ -238,7 +234,8 @@ unsigned long get_fb_unmapped_area(struct file *filp, unsigned long orig_addr,
 	align_goal = (64UL * 1024);
 
 	do {
-		addr = get_area(NULL, orig_addr, len + (align_goal - PAGE_SIZE), pgoff, flags);
+		addr = mm_get_unmapped_area(current->mm, NULL, orig_addr,
+					    len + (align_goal - PAGE_SIZE), pgoff, flags);
 		if (!(addr & ~PAGE_MASK)) {
 			addr = (addr + (align_goal - 1UL)) & ~(align_goal - 1UL);
 			break;
@@ -256,7 +253,7 @@ unsigned long get_fb_unmapped_area(struct file *filp, unsigned long orig_addr,
 	 * be obtained.
 	 */
 	if (addr & ~PAGE_MASK)
-		addr = get_area(NULL, orig_addr, len, pgoff, flags);
+		addr = mm_get_unmapped_area(current->mm, NULL, orig_addr, len, pgoff, flags);
 
 	return addr;
 }
@@ -292,7 +289,7 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 	    gap == RLIM_INFINITY ||
 	    sysctl_legacy_va_layout) {
 		mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
-		mm->get_unmapped_area = arch_get_unmapped_area;
+		clear_bit(MMF_TOPDOWN, &mm->flags);
 	} else {
 		/* We know it's 32-bit */
 		unsigned long task_size = STACK_TOP32;
@@ -303,7 +300,7 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 			gap = (task_size / 6 * 5);
 
 		mm->mmap_base = PAGE_ALIGN(task_size - gap - random_factor);
-		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+		set_bit(MMF_TOPDOWN, &mm->flags);
 	}
 }
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index b432500c13a5..38a1bef47efb 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -123,7 +123,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 		    (!vma || addr + len <= vm_start_gap(vma)))
 			return addr;
 	}
-	if (mm->get_unmapped_area == arch_get_unmapped_area)
+	if (!test_bit(MMF_TOPDOWN, &mm->flags))
 		return hugetlb_get_unmapped_area_bottomup(file, addr, len,
 				pgoff, flags);
 	else
diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index 262f5fb18d74..22b65a5f5ec6 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -113,7 +113,7 @@ static unsigned long sgx_get_unmapped_area(struct file *file,
 	if (flags & MAP_FIXED)
 		return addr;
 
-	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
 }
 
 #ifdef CONFIG_COMPAT
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 5804bbae4f01..6d77c0039617 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -141,7 +141,7 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 	}
 
 get_unmapped_area:
-	if (mm->get_unmapped_area == arch_get_unmapped_area)
+	if (!test_bit(MMF_TOPDOWN, &mm->flags))
 		return hugetlb_get_unmapped_area_bottomup(file, addr, len,
 				pgoff, flags);
 	else
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index c90c20904a60..a2cabb1c81e1 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -129,9 +129,9 @@ static void arch_pick_mmap_base(unsigned long *base, unsigned long *legacy_base,
 void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 {
 	if (mmap_is_legacy())
-		mm->get_unmapped_area = arch_get_unmapped_area;
+		clear_bit(MMF_TOPDOWN, &mm->flags);
 	else
-		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+		set_bit(MMF_TOPDOWN, &mm->flags);
 
 	arch_pick_mmap_base(&mm->mmap_base, &mm->mmap_legacy_base,
 			arch_rnd(mmap64_rnd_bits), task_size_64bit(0),
diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 3c6670cf905f..9b80e622ae80 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -544,7 +544,7 @@ static unsigned long get_unmapped_area_zero(struct file *file,
 	}
 
 	/* Otherwise flags & MAP_PRIVATE: with no shmem object beneath it */
-	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
 #else
 	return -ENOSYS;
 #endif
diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 93ebedc5ec8c..47c126d37b59 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -329,14 +329,14 @@ static unsigned long dax_get_unmapped_area(struct file *filp,
 	if ((off + len_align) < off)
 		goto out;
 
-	addr_align = current->mm->get_unmapped_area(filp, addr, len_align,
-			pgoff, flags);
+	addr_align = mm_get_unmapped_area(current->mm, filp, addr, len_align,
+					  pgoff, flags);
 	if (!IS_ERR_VALUE(addr_align)) {
 		addr_align += (off - addr_align) & (align - 1);
 		return addr_align;
 	}
 out:
-	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);
 }
 
 static const struct address_space_operations dev_dax_aops = {
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 6502c7e776d1..3dee18bf47ed 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -249,11 +249,11 @@ generic_hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 	}
 
 	/*
-	 * Use mm->get_unmapped_area value as a hint to use topdown routine.
+	 * Use MMF_TOPDOWN flag as a hint to use topdown routine.
 	 * If architectures have special needs, they should define their own
 	 * version of hugetlb_get_unmapped_area.
 	 */
-	if (mm->get_unmapped_area == arch_get_unmapped_area_topdown)
+	if (test_bit(MMF_TOPDOWN, &mm->flags))
 		return hugetlb_get_unmapped_area_topdown(file, addr, len,
 				pgoff, flags);
 	return hugetlb_get_unmapped_area_bottomup(file, addr, len,
diff --git a/fs/proc/inode.c b/fs/proc/inode.c
index 75396a24fd8c..d19434e2a58e 100644
--- a/fs/proc/inode.c
+++ b/fs/proc/inode.c
@@ -455,8 +455,9 @@ pde_get_unmapped_area(struct proc_dir_entry *pde, struct file *file, unsigned long orig_addr,
 		return pde->proc_ops->proc_get_unmapped_area(file, orig_addr, len, pgoff, flags);
 
 #ifdef CONFIG_MMU
-	return current->mm->get_unmapped_area(file, orig_addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, file, orig_addr, len, pgoff, flags);
 #endif
+
 	return orig_addr;
 }
diff --git a/fs/ramfs/file-mmu.c b/fs/ramfs/file-mmu.c
index c7a1aa3c882b..b45c7edc3225 100644
--- a/fs/ramfs/file-mmu.c
+++ b/fs/ramfs/file-mmu.c
@@ -35,7 +35,7 @@ static unsigned long ramfs_mmu_get_unmapped_area(struct file *file,
 		unsigned long addr, unsigned long len, unsigned long pgoff,
 		unsigned long flags)
 {
-	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
 }
 
 const struct file_operations ramfs_file_operations = {
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5240bd7bca33..9313e43123d4 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -777,11 +777,7 @@ struct mm_struct {
 		} ____cacheline_aligned_in_smp;
 
 		struct maple_tree mm_mt;
-#ifdef CONFIG_MMU
-		unsigned long (*get_unmapped_area) (struct file *filp,
-				unsigned long addr, unsigned long len,
-				unsigned long pgoff, unsigned long flags);
-#endif
+
 		unsigned long mmap_base;	/* base of mmap area */
 		unsigned long mmap_legacy_base;	/* base of mmap area in bottom-up allocations */
 #ifdef CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 02f5090ffea2..e62ff805cfc9 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -92,9 +92,12 @@ static inline int get_dumpable(struct mm_struct *mm)
 #define MMF_VM_MERGE_ANY	30
 #define MMF_VM_MERGE_ANY_MASK	(1 << MMF_VM_MERGE_ANY)
 
+#define MMF_TOPDOWN		31	/* mm searches top down by default */
+#define MMF_TOPDOWN_MASK	(1 << MMF_TOPDOWN)
+
 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
 				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK |\
-				 MMF_VM_MERGE_ANY_MASK)
+				 MMF_VM_MERGE_ANY_MASK | MMF_TOPDOWN_MASK)
 
 static inline unsigned long mmf_init_flags(unsigned long flags)
 {
diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index b6543f9d78d6..ed1caa26c8be 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -8,6 +8,7 @@
 #include <linux/mm_types.h>
 #include <linux/gfp.h>
 #include <linux/sync_core.h>
+#include <linux/sched/coredump.h>
 
 /*
  * Routines for handling mm_structs
@@ -186,6 +187,10 @@ arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 			  unsigned long len, unsigned long pgoff,
 			  unsigned long flags);
 
+unsigned long mm_get_unmapped_area(struct mm_struct *mm, struct file *filp,
+				   unsigned long addr, unsigned long len,
+				   unsigned long pgoff, unsigned long flags);
+
 unsigned long
 generic_get_unmapped_area(struct file *filp, unsigned long addr,
 			  unsigned long len, unsigned long pgoff,
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 5d4b448fdc50..405bab0a560c 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3520,7 +3520,7 @@ static unsigned long io_uring_mmu_get_unmapped_area(struct file *filp,
 #else
 	addr = 0UL;
 #endif
-	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);
 }
 
 #else /* !CONFIG_MMU */
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index 86571e760dd6..74d566dcd2cb 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -314,7 +314,7 @@ static unsigned long arena_get_unmapped_area(struct file *filp, unsigned long addr,
 		return -EINVAL;
 	}
 
-	ret = current->mm->get_unmapped_area(filp, addr, len * 2, 0, flags);
+	ret = mm_get_unmapped_area(current->mm, filp, addr, len * 2, 0, flags);
 	if (IS_ERR_VALUE(ret))
 		return ret;
 	if ((ret >> 32) == ((ret + len - 1) >> 32))
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index ae2ff73bde7e..dead5e1977d8 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -980,7 +980,7 @@ static unsigned long bpf_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (map->ops->map_get_unmapped_area)
 		return map->ops->map_get_unmapped_area(filp, addr, len, pgoff, flags);
 #ifdef CONFIG_MMU
-	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);
 #else
 	return addr;
 #endif
diff --git a/mm/debug.c b/mm/debug.c
index c1c1a6a484e4..37a17f77df9f 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -180,9 +180,6 @@ EXPORT_SYMBOL(dump_vma);
 void dump_mm(const struct mm_struct *mm)
 {
 	pr_emerg("mm %px task_size %lu\n"
-#ifdef CONFIG_MMU
-		"get_unmapped_area %px\n"
-#endif
 		"mmap_base %lu mmap_legacy_base %lu\n"
 		"pgd %px mm_users %d mm_count %d pgtables_bytes %lu map_count %d\n"
 		"hiwater_rss %lx hiwater_vm %lx total_vm %lx locked_vm %lx\n"
@@ -208,9 +205,6 @@ void dump_mm(const struct mm_struct *mm)
 		"def_flags: %#lx(%pGv)\n",
 		mm, mm->task_size,
-#ifdef CONFIG_MMU
-		mm->get_unmapped_area,
-#endif
 		mm->mmap_base, mm->mmap_legacy_base,
 		mm->pgd, atomic_read(&mm->mm_users),
 		atomic_read(&mm->mm_count),
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9859aa4f7553..cede9ccb84dc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -824,8 +824,8 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	if (len_pad < len || (off + len_pad) < off)
 		return 0;
 
-	ret = current->mm->get_unmapped_area(filp, addr, len_pad,
-					     off >> PAGE_SHIFT, flags);
+	ret = mm_get_unmapped_area(current->mm, filp, addr, len_pad,
+				   off >> PAGE_SHIFT, flags);
 
 	/*
 	 * The failure might be due to length padding. The caller will retry
@@ -843,8 +843,7 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 
 	off_sub = (off - ret) & (size - 1);
 
-	if (current->mm->get_unmapped_area == arch_get_unmapped_area_topdown &&
-	    !off_sub)
+	if (test_bit(MMF_TOPDOWN, &current->mm->flags) && !off_sub)
 		return ret + size;
 
 	ret += off_sub;
@@ -861,7 +860,7 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (ret)
 		return ret;
 
-	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);
 }
 EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
diff --git a/mm/mmap.c b/mm/mmap.c
index 6dbda99a47da..224e9ce1e2fd 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1813,7 +1813,8 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		  unsigned long pgoff, unsigned long flags)
 {
 	unsigned long (*get_area)(struct file *, unsigned long,
-				  unsigned long, unsigned long, unsigned long);
+				  unsigned long, unsigned long, unsigned long)
+				  = NULL;
 
 	unsigned long error = arch_mmap_check(addr, len, flags);
 	if (error)
@@ -1823,7 +1824,6 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 	if (len > TASK_SIZE)
 		return -ENOMEM;
 
-	get_area = current->mm->get_unmapped_area;
 	if (file) {
 		if (file->f_op->get_unmapped_area)
 			get_area = file->f_op->get_unmapped_area;
@@ -1842,7 +1842,11 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 	if (!file)
 		pgoff = 0;
 
-	addr = get_area(file, addr, len, pgoff, flags);
+	if (get_area)
+		addr = get_area(file, addr, len, pgoff, flags);
+	else
+		addr = mm_get_unmapped_area(current->mm, file, addr, len,
+					    pgoff, flags);
 	if (IS_ERR_VALUE(addr))
 		return addr;
 
@@ -1857,6 +1861,17 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 
 EXPORT_SYMBOL(get_unmapped_area);
 
+unsigned long
+mm_get_unmapped_area(struct mm_struct *mm, struct file *file,
+		     unsigned long addr, unsigned long len,
+		     unsigned long pgoff, unsigned long flags)
+{
+	if (test_bit(MMF_TOPDOWN, &mm->flags))
+		return arch_get_unmapped_area_topdown(file, addr, len, pgoff, flags);
+	return arch_get_unmapped_area(file, addr, len, pgoff, flags);
+}
+EXPORT_SYMBOL(mm_get_unmapped_area);
+
 /**
  * find_vma_intersection() - Look up the first VMA which intersects the interval
  * @mm: The process address space.
diff --git a/mm/shmem.c b/mm/shmem.c
index 0aad0d9a621b..4078c3a1b2d0 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2273,8 +2273,6 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 				      unsigned long uaddr, unsigned long len,
 				      unsigned long pgoff, unsigned long flags)
 {
-	unsigned long (*get_area)(struct file *,
-		unsigned long, unsigned long, unsigned long, unsigned long);
 	unsigned long addr;
 	unsigned long offset;
 	unsigned long inflated_len;
@@ -2284,8 +2282,8 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	if (len > TASK_SIZE)
 		return -ENOMEM;
 
-	get_area = current->mm->get_unmapped_area;
-	addr = get_area(file, uaddr, len, pgoff, flags);
+	addr = mm_get_unmapped_area(current->mm, file, uaddr, len, pgoff,
+				    flags);
 
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		return addr;
@@ -2342,7 +2340,8 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	if (inflated_len < len)
 		return addr;
 
-	inflated_addr = get_area(NULL, uaddr, inflated_len, 0, flags);
+	inflated_addr = mm_get_unmapped_area(current->mm, NULL, uaddr,
+					     inflated_len, 0, flags);
 	if (IS_ERR_VALUE(inflated_addr))
 		return addr;
 	if (inflated_addr & ~PAGE_MASK)
@@ -4807,7 +4806,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 				      unsigned long addr, unsigned long len,
 				      unsigned long pgoff, unsigned long flags)
 {
-	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
+	return mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);
 }
 #endif
diff --git a/mm/util.c b/mm/util.c
index 669397235787..8619d353a1aa 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -469,17 +469,17 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 
 	if (mmap_is_legacy(rlim_stack)) {
 		mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
-		mm->get_unmapped_area = arch_get_unmapped_area;
+		clear_bit(MMF_TOPDOWN, &mm->flags);
 	} else {
 		mm->mmap_base = mmap_base(random_factor, rlim_stack);
-		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+		set_bit(MMF_TOPDOWN, &mm->flags);
 	}
 }
 #elif defined(CONFIG_MMU) && !defined(HAVE_ARCH_PICK_MMAP_LAYOUT)
 void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 {
 	mm->mmap_base = TASK_UNMAPPED_BASE;
-	mm->get_unmapped_area = arch_get_unmapped_area;
+	clear_bit(MMF_TOPDOWN, &mm->flags);
 }
 #endif

From patchwork Tue Mar 26 02:16:45 2024
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 13603258
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
 broonie@kernel.org, christophe.leroy@csgroup.eu,
 dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com,
 keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org,
 mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org
Cc: rick.p.edgecombe@intel.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v4 03/14] mm: Introduce arch_get_unmapped_area_vmflags()
Date: Mon, 25 Mar 2024 19:16:45 -0700
Message-Id: <20240326021656.202649-4-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
References: <20240326021656.202649-1-rick.p.edgecombe@intel.com>

When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

 1. That the new mapping isn't placed in any existing mapping's guard
    gaps.
 2. That existing mappings don't end up inside the new mapping's guard
    gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2. So for example, if there is a PAGE_SIZE free area, and a
PAGE_SIZE shadow stack (a mapping type that has a guard gap) is being
placed, mmap() may place the shadow stack in that free area. Then the
mapping that is supposed to have a guard gap will not have a gap to the
adjacent VMA.

In order to take the start gap into account, the maple tree search needs
to know the size of the start gap the new mapping will need. The call
chain from do_mmap() to the actual maple tree search looks like this:

do_mmap(size, vm_flags, map_flags, ..)
	mm/mmap.c:get_unmapped_area(size, map_flags, ...)
		arch_get_unmapped_area(size, map_flags, ...)
			vm_unmapped_area(struct vm_unmapped_area_info)

One option would be to add another MAP_ flag to mean a one page start gap
(as is needed for shadow stack), but this consumes a flag unnecessarily.
Another option could be to simply increase the size passed in do_mmap()
by the start gap size, and adjust after the fact, but this will interfere
with the alignment requirements passed in struct vm_unmapped_area_info,
which are unknown to mmap.c. Instead, introduce variants of
arch_get_unmapped_area/_topdown() that take vm_flags. In future changes,
these variants can be used in mmap.c:get_unmapped_area() to allow the
vm_flags to be passed through to vm_unmapped_area(), while preserving the
normal arch_get_unmapped_area/_topdown() for the existing callers.
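Condensed from the diff below: on architectures that do not define
HAVE_ARCH_UNMAPPED_AREA_VMFLAGS, the new variants are generic wrappers
that simply drop vm_flags, so behavior is unchanged until an architecture
opts in (a sketch of one wrapper, not the full patch):

#ifndef HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
unsigned long
arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
			       unsigned long len, unsigned long pgoff,
			       unsigned long flags, vm_flags_t vm_flags)
{
	/* Default: accept vm_flags but ignore it. */
	return arch_get_unmapped_area(filp, addr, len, pgoff, flags);
}
#endif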
Signed-off-by: Rick Edgecombe
---
v4:
 - Remove externs (Christophe Leroy)
---
 include/linux/sched/mm.h | 17 +++++++++++++++++
 mm/mmap.c                | 28 ++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
index ed1caa26c8be..91546493c43d 100644
--- a/include/linux/sched/mm.h
+++ b/include/linux/sched/mm.h
@@ -191,6 +191,23 @@ unsigned long mm_get_unmapped_area(struct mm_struct *mm, struct file *filp,
 				   unsigned long addr, unsigned long len,
 				   unsigned long pgoff, unsigned long flags);
 
+unsigned long
+arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+			       unsigned long len, unsigned long pgoff,
+			       unsigned long flags, vm_flags_t vm_flags);
+unsigned long
+arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr,
+				       unsigned long len, unsigned long pgoff,
+				       unsigned long flags, vm_flags_t);
+
+unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm,
+					   struct file *filp,
+					   unsigned long addr,
+					   unsigned long len,
+					   unsigned long pgoff,
+					   unsigned long flags,
+					   vm_flags_t vm_flags);
+
 unsigned long
 generic_get_unmapped_area(struct file *filp, unsigned long addr,
 			  unsigned long len, unsigned long pgoff,
diff --git a/mm/mmap.c b/mm/mmap.c
index 224e9ce1e2fd..2bd7580b8f0b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1808,6 +1808,34 @@ arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 }
 #endif
 
+#ifndef HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
+unsigned long
+arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr, unsigned long len,
+			       unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
+{
+	return arch_get_unmapped_area(filp, addr, len, pgoff, flags);
+}
+
+unsigned long
+arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr,
+				       unsigned long len, unsigned long pgoff,
+				       unsigned long flags, vm_flags_t vm_flags)
+{
+	return arch_get_unmapped_area_topdown(filp, addr, len, pgoff, flags);
+}
+#endif
+
+unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm, struct file *filp,
+					   unsigned long addr, unsigned long len,
+					   unsigned long pgoff, unsigned long flags,
+					   vm_flags_t vm_flags)
+{
+	if (test_bit(MMF_TOPDOWN, &mm->flags))
+		return arch_get_unmapped_area_topdown_vmflags(filp, addr, len, pgoff,
+							      flags, vm_flags);
+	return arch_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags, vm_flags);
+}
+
 unsigned long
 get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		  unsigned long pgoff, unsigned long flags)

From patchwork Tue Mar 26 02:16:46 2024
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 13603259
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
 broonie@kernel.org, christophe.leroy@csgroup.eu,
 dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com,
 keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org,
 mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org
Cc: rick.p.edgecombe@intel.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v4 04/14] mm: Remove export for get_unmapped_area()
Date: Mon, 25 Mar 2024 19:16:46 -0700
Message-Id: <20240326021656.202649-5-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
References: <20240326021656.202649-1-rick.p.edgecombe@intel.com>

The mm/mmap.c function get_unmapped_area() is not used by any modules,
so it doesn't need to be exported. Remove the export.

Signed-off-by: Rick Edgecombe
---
v4:
 - New patch split from "mm: Use get_unmapped_area_vmflags()"
   (Christophe Leroy)
---
 mm/mmap.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 2bd7580b8f0b..d160e88b1b1e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1887,8 +1887,6 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 	return error ? error : addr;
 }
 
-EXPORT_SYMBOL(get_unmapped_area);
-
 unsigned long
 mm_get_unmapped_area(struct mm_struct *mm, struct file *file,
 		     unsigned long addr, unsigned long len,

From patchwork Tue Mar 26 02:16:47 2024
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 13603261
Subject: [PATCH v4 05/14] mm: Use get_unmapped_area_vmflags()
From: Rick Edgecombe
Date: Mon, 25 Mar 2024 19:16:47 -0700
Message-Id: <20240326021656.202649-6-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>

When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

1. That the new mapping isn't placed in any existing mapping's guard
   gaps.
2. That the new mapping isn't placed such that any existing mapping
   ends up in *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2. So, for example, if there is a PAGE_SIZE free area, and an
mmap() of PAGE_SIZE with a type that has a guard gap is being placed,
mmap() may place the shadow stack in that free area. Then the mapping
that is supposed to have a guard gap will not have a gap to the adjacent
VMA.

Use mm_get_unmapped_area_vmflags() in the do_mmap() path so future
changes can cause shadow stack mappings to be placed with a guard gap.
Also use the THP variant that takes vm_flags, so that THP shadow stack
can get the same treatment. Adjust the vm_flags calculation to happen
earlier so that the vm_flags can be passed into __get_unmapped_area().
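[Editor's illustration: the following stand-alone sketch makes the two
placement rules concrete. Everything in it is invented for illustration --
the struct, the helper names, the one-page gap size, and the assumption
that the guard gap sits directly below a gap-requiring mapping, as for a
stack. It is not the kernel's implementation.]

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define GUARD_GAP PAGE_SIZE	/* assumed: one page, below the mapping */

struct vma { unsigned long start, end; bool needs_gap; };

static bool overlaps(unsigned long a0, unsigned long a1,
		     unsigned long b0, unsigned long b1)
{
	return a0 < b1 && b0 < a1;
}

/* Rule 1: the candidate must stay out of the existing mapping's gap.
 * Rule 2: the existing mapping must stay out of the candidate's gap.
 */
static bool placement_ok(const struct vma *old, unsigned long addr,
			 unsigned long len, bool new_needs_gap)
{
	if (overlaps(addr, addr + len, old->start, old->end))
		return false;				/* plain overlap */
	if (old->needs_gap &&
	    overlaps(addr, addr + len, old->start - GUARD_GAP, old->start))
		return false;				/* rule 1 */
	if (new_needs_gap &&
	    overlaps(old->start, old->end, addr - GUARD_GAP, addr))
		return false;				/* rule 2 */
	return true;
}

int main(void)
{
	struct vma old = { 0x1fe000UL, 0x1ff000UL, false };

	/* The PAGE_SIZE hole right above "old" is fine for a plain
	 * mapping, but a gap-requiring mapping placed there would have
	 * "old" inside its guard gap -- the case rule 2 rejects.
	 */
	printf("plain mapping: %d\n",
	       placement_ok(&old, 0x1ff000UL, PAGE_SIZE, false));
	printf("needs gap:     %d\n",
	       placement_ok(&old, 0x1ff000UL, PAGE_SIZE, true));
	return 0;
}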
Signed-off-by: Rick Edgecombe
Reviewed-by: Christophe Leroy
---
v4:
 - Split removal of get_unmapped_area() export into a separate patch (Christophe Leroy)
v2:
 - Make get_unmapped_area() a static inline (Kirill)
---
 include/linux/mm.h | 11 ++++++++++-
 mm/mmap.c          | 32 ++++++++++++++++----------------
 2 files changed, 26 insertions(+), 17 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0436b919f1c7..8b13cd891b53 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3383,7 +3383,16 @@ extern int install_special_mapping(struct mm_struct *mm,
 unsigned long randomize_stack_top(unsigned long stack_top);
 unsigned long randomize_page(unsigned long start, unsigned long range);
 
-extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+unsigned long
+__get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+		unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags);
+
+static inline unsigned long
+get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+		unsigned long pgoff, unsigned long flags)
+{
+	return __get_unmapped_area(file, addr, len, pgoff, flags, 0);
+}
 
 extern unsigned long mmap_region(struct file *file, unsigned long addr,
 	unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
diff --git a/mm/mmap.c b/mm/mmap.c
index d160e88b1b1e..68b5bfcebadd 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1255,18 +1255,6 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 	if (mm->map_count > sysctl_max_map_count)
 		return -ENOMEM;
 
-	/* Obtain the address to map to. we verify (or select) it and ensure
-	 * that it represents a valid section of the address space.
-	 */
-	addr = get_unmapped_area(file, addr, len, pgoff, flags);
-	if (IS_ERR_VALUE(addr))
-		return addr;
-
-	if (flags & MAP_FIXED_NOREPLACE) {
-		if (find_vma_intersection(mm, addr, addr + len))
-			return -EEXIST;
-	}
-
 	if (prot == PROT_EXEC) {
 		pkey = execute_only_pkey(mm);
 		if (pkey < 0)
@@ -1280,6 +1268,18 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 	vm_flags |= calc_vm_prot_bits(prot, pkey) | calc_vm_flag_bits(flags) |
 			mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;
 
+	/* Obtain the address to map to. we verify (or select) it and ensure
+	 * that it represents a valid section of the address space.
+	 */
+	addr = __get_unmapped_area(file, addr, len, pgoff, flags, vm_flags);
+	if (IS_ERR_VALUE(addr))
+		return addr;
+
+	if (flags & MAP_FIXED_NOREPLACE) {
+		if (find_vma_intersection(mm, addr, addr + len))
+			return -EEXIST;
+	}
+
 	if (flags & MAP_LOCKED)
 		if (!can_do_mlock())
 			return -EPERM;
@@ -1837,8 +1837,8 @@ unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm, struct file *fi
 }
 
 unsigned long
-get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
-		unsigned long pgoff, unsigned long flags)
+__get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+		unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
 {
 	unsigned long (*get_area)(struct file *, unsigned long,
 				  unsigned long, unsigned long, unsigned long)
@@ -1873,8 +1873,8 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 	if (get_area)
 		addr = get_area(file, addr, len, pgoff, flags);
 	else
-		addr = mm_get_unmapped_area(current->mm, file, addr, len,
-					    pgoff, flags);
+		addr = mm_get_unmapped_area_vmflags(current->mm, file, addr, len,
+						    pgoff, flags, vm_flags);
 	if (IS_ERR_VALUE(addr))
 		return addr;

From patchwork Tue Mar 26 02:16:48 2024
Subject: [PATCH v4 06/14] thp: Add thp_get_unmapped_area_vmflags()
From: Rick Edgecombe
Date: Mon, 25 Mar 2024 19:16:48 -0700
Message-Id: <20240326021656.202649-7-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOWSTACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

1. That the new mapping isn't placed in any existing mapping's guard
   gaps.
2. That the new mapping isn't placed such that any existing mapping
   ends up in *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2. So, for example, if there is a PAGE_SIZE free area, and an
mmap() of PAGE_SIZE with a type that has a guard gap is being placed,
mmap() may place the shadow stack in that free area. Then the mapping
that is supposed to have a guard gap will not have a gap to the adjacent
VMA.

Add a THP implementation of the vm_flags variant of get_unmapped_area().
Future changes will call this from mmap.c in the do_mmap() path to allow
shadow stacks to be placed with consideration taken for the start guard
gap. Shadow stack memory is always private and anonymous and so special
guard gap logic is not needed in a lot of cases, but it can be mapped by
THP, so it needs to be handled.
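[Editor's illustration: the shape of the change below is a common one --
introduce a _vmflags variant that takes the extra argument, and keep the
old entry point as a thin wrapper passing 0. A stand-alone sketch of that
pattern follows; the names and types are illustrative stand-ins, not the
kernel's.]

#include <stdio.h>

typedef unsigned long vm_flags_t;

/* New variant: the extra vm_flags argument lets gap-aware placement
 * logic see what kind of mapping is being created.
 */
static unsigned long get_area_vmflags(unsigned long addr, unsigned long len,
				      vm_flags_t vm_flags)
{
	(void)len;
	(void)vm_flags;	/* a real implementation would consult this */
	return addr;
}

/* Old entry point kept as a wrapper: existing callers compile and
 * behave exactly as before, receiving the "no special flags" path.
 */
static unsigned long get_area(unsigned long addr, unsigned long len)
{
	return get_area_vmflags(addr, len, 0);
}

int main(void)
{
	printf("%#lx\n", get_area(0x1ff000UL, 4096UL));
	return 0;
}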
Signed-off-by: Rick Edgecombe
Reviewed-by: Christophe Leroy
---
 include/linux/huge_mm.h | 11 +++++++++++
 mm/huge_memory.c        | 23 ++++++++++++++++-------
 mm/mmap.c               | 12 +++++++-----
 3 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index de0c89105076..cc599de5e397 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -262,6 +262,9 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
+unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags,
+		vm_flags_t vm_flags);
 
 void folio_prep_large_rmappable(struct folio *folio);
 bool can_split_folio(struct folio *folio, int *pextra_pins);
@@ -417,6 +420,14 @@ static inline void folio_prep_large_rmappable(struct folio *folio) {}
 
 #define thp_get_unmapped_area	NULL
 
+static inline unsigned long
+thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff,
+		unsigned long flags, vm_flags_t vm_flags)
+{
+	return 0;
+}
+
 static inline bool
 can_split_folio(struct folio *folio, int *pextra_pins)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cede9ccb84dc..b29f3e456888 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -808,7 +808,8 @@ static inline bool is_transparent_hugepage(struct folio *folio)
 
 static unsigned long __thp_get_unmapped_area(struct file *filp,
 		unsigned long addr, unsigned long len,
-		loff_t off, unsigned long flags, unsigned long size)
+		loff_t off, unsigned long flags, unsigned long size,
+		vm_flags_t vm_flags)
 {
 	loff_t off_end = off + len;
 	loff_t off_align = round_up(off, size);
@@ -824,8 +825,8 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	if (len_pad < len || (off + len_pad) < off)
 		return 0;
 
-	ret = mm_get_unmapped_area(current->mm, filp, addr, len_pad,
-				   off >> PAGE_SHIFT, flags);
+	ret = mm_get_unmapped_area_vmflags(current->mm, filp, addr, len_pad,
+					   off >> PAGE_SHIFT, flags, vm_flags);
 
 	/*
 	 * The failure might be due to length padding. The caller will retry
@@ -850,17 +851,25 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	return ret;
 }
 
-unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
-		unsigned long len, unsigned long pgoff, unsigned long flags)
+unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags,
+		vm_flags_t vm_flags)
 {
 	unsigned long ret;
 	loff_t off = (loff_t)pgoff << PAGE_SHIFT;
 
-	ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE);
+	ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE, vm_flags);
 	if (ret)
 		return ret;
 
-	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);
+	return mm_get_unmapped_area_vmflags(current->mm, filp, addr, len, pgoff, flags,
+					    vm_flags);
+}
+
+unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+	return thp_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags, 0);
 }
 EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 68b5bfcebadd..f734e4fa6d94 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1861,20 +1861,22 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		 * so use shmem's get_unmapped_area in case it can be huge.
 		 */
 		get_area = shmem_get_unmapped_area;
-	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
-		/* Ensures that larger anonymous mappings are THP aligned. */
-		get_area = thp_get_unmapped_area;
 	}
 
 	/* Always treat pgoff as zero for anonymous memory. */
 	if (!file)
 		pgoff = 0;
 
-	if (get_area)
+	if (get_area) {
 		addr = get_area(file, addr, len, pgoff, flags);
-	else
+	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+		/* Ensures that larger anonymous mappings are THP aligned.
+		 */
+		addr = thp_get_unmapped_area_vmflags(file, addr, len,
+						     pgoff, flags, vm_flags);
+	} else {
 		addr = mm_get_unmapped_area_vmflags(current->mm, file, addr, len,
 						    pgoff, flags, vm_flags);
+	}
 
 	if (IS_ERR_VALUE(addr))
 		return addr;

From patchwork Tue Mar 26 02:16:49 2024
Subject: [PATCH v4 07/14] csky: Use initializer for struct vm_unmapped_area_info
From: Rick Edgecombe
Date: Mon, 25 Mar 2024 19:16:49 -0700
Message-Id: <20240326021656.202649-8-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
Future changes will need to add a new member to struct
vm_unmapped_area_info. This would cause trouble for any call site that
doesn't initialize the struct. Currently every caller sets each member
manually, so if new members are added they will be uninitialized and the
core code parsing the struct will see garbage in the new member.

It could be possible to initialize the new member manually to 0 at each
call site. This and a couple other options were discussed, and a working
consensus (see links) was that in general the best way to accomplish this
would be via static initialization with designated member initializers.
Having some struct vm_unmapped_area_info instances not zero initialized
will put those sites at risk of feeding garbage into vm_unmapped_area()
if the convention is to zero initialize the struct and any new member
addition misses a call site that initializes each member manually.

It could be possible to leave the code mostly untouched, and just change
the line:

    struct vm_unmapped_area_info info

to:

    struct vm_unmapped_area_info info = {};

However, that would leave cleanup for the members that are manually set
to zero, as it would no longer be required. So to reduce the chance of
bugs via uninitialized members, instead simply continue the process to
initialize the struct this way tree wide. This will zero any unspecified
members. Move the member initializers to the struct declaration when they
are known at that time. Leave out the members that were manually
initialized to zero, as this would be redundant for designated
initializers.
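[Editor's illustration: the C semantics the log leans on can be checked
in isolation -- with designated initializers, every member not named in
the initializer list is zero initialized. A small user-space sketch; the
struct is a stand-in with the same members, and new_member is a
hypothetical future addition, not a real kernel field.]

#include <stdio.h>

struct vm_unmapped_area_info {	/* stand-in, not the kernel definition */
	unsigned long flags;
	unsigned long length;
	unsigned long low_limit;
	unsigned long high_limit;
	unsigned long align_mask;
	unsigned long align_offset;
	unsigned long new_member;	/* hypothetical future addition */
};

int main(void)
{
	/* Members not named below -- flags, align_mask, align_offset and
	 * the hypothetical new_member -- are guaranteed to be zero.
	 */
	struct vm_unmapped_area_info info = {
		.length = 4096,
		.low_limit = 0x10000,
		.high_limit = 0x7fff0000,
	};

	printf("flags=%lu new_member=%lu\n", info.flags, info.new_member);
	return 0;
}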
Signed-off-by: Rick Edgecombe
Reviewed-by: Guo Ren
Reviewed-by: Christophe Leroy
Cc: Guo Ren
Cc: linux-csky@vger.kernel.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
---
v3:
 - Fixed spelling errors in log
 - Be consistent about field vs member in log

Hi,

This patch was split and refactored out of a tree-wide change [0] to just
zero-init each struct vm_unmapped_area_info. The overall goal of the
series is to help shadow stack guard gaps. Currently, there is only one
arch with shadow stacks, but two more are in progress. It is compile
tested only.

There was further discussion that this method of initializing the
structs, while nice in some ways, has a greater risk of introducing bugs
in some of the more complicated callers. Since this version was already
reviewed by arch maintainers, leave it as already acknowledged.

Thanks,
Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/csky/abiv1/mmap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/csky/abiv1/mmap.c b/arch/csky/abiv1/mmap.c
index 6792aca49999..7f826331d409 100644
--- a/arch/csky/abiv1/mmap.c
+++ b/arch/csky/abiv1/mmap.c
@@ -28,7 +28,12 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	int do_align = 0;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.length = len,
+		.low_limit = mm->mmap_base,
+		.high_limit = TASK_SIZE,
+		.align_offset = pgoff << PAGE_SHIFT
+	};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
@@ -61,11 +66,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
-	info.length = len;
-	info.low_limit = mm->mmap_base;
-	info.high_limit = TASK_SIZE;
 	info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
-	info.align_offset = pgoff << PAGE_SHIFT;
 	return vm_unmapped_area(&info);
 }

From patchwork Tue Mar 26 02:16:50 2024
Subject: [PATCH v4 08/14] parisc: Use initializer for struct vm_unmapped_area_info
From: Rick Edgecombe
Date: Mon, 25 Mar 2024 19:16:50 -0700
Message-Id: <20240326021656.202649-9-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>

Future changes will need to add a new member to struct
vm_unmapped_area_info. This would cause trouble for any call site that
doesn't initialize the struct. Currently every caller sets each member
manually, so if new members are added they will be uninitialized and the
core code parsing the struct will see garbage in the new member.

It could be possible to initialize the new member manually to 0 at each
call site. This and a couple other options were discussed, and a working
consensus (see links) was that in general the best way to accomplish this
would be via static initialization with designated member initializers.
Having some struct vm_unmapped_area_info instances not zero initialized
will put those sites at risk of feeding garbage into vm_unmapped_area()
if the convention is to zero initialize the struct and any new member
addition misses a call site that initializes each member manually.

It could be possible to leave the code mostly untouched, and just change
the line:

    struct vm_unmapped_area_info info

to:

    struct vm_unmapped_area_info info = {};

However, that would leave cleanup for the members that are manually set
to zero, as it would no longer be required. So to reduce the chance of
bugs via uninitialized members, instead simply continue the process to
initialize the struct this way tree wide. This will zero any unspecified
members. Move the member initializers to the struct declaration when they
are known at that time. Leave out the members that were manually
initialized to zero, as this would be redundant for designated
initializers.

Signed-off-by: Rick Edgecombe
Reviewed-by: Christophe Leroy
Acked-by: Helge Deller
Cc: "James E.J. Bottomley"
Cc: Helge Deller
Cc: linux-parisc@vger.kernel.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
---
v3:
 - Fixed spelling errors in log
 - Be consistent about field vs member in log

Hi,

This patch was split and refactored out of a tree-wide change [0] to just
zero-init each struct vm_unmapped_area_info. The overall goal of the
series is to help shadow stack guard gaps. Currently, there is only one
arch with shadow stacks, but two more are in progress. It is compile
tested only.

There was further discussion that this method of initializing the
structs, while nice in some ways, has a greater risk of introducing bugs
in some of the more complicated callers. Since this version was already
reviewed by arch maintainers, leave it as already acknowledged.

Thanks,
Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/parisc/kernel/sys_parisc.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index 98af719d5f85..f7722451276e 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -104,7 +104,9 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	struct vm_area_struct *vma, *prev;
 	unsigned long filp_pgoff;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.length = len
+	};
 
 	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
@@ -139,7 +141,6 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 		return addr;
 	}
 
-	info.length = len;
 	info.align_mask = do_color_align ? (PAGE_MASK & (SHM_COLOUR - 1)) : 0;
 	info.align_offset = shared_align_offset(filp_pgoff, pgoff);
 
@@ -160,7 +161,6 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 		 */
 	}
 
-	info.flags = 0;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = mmap_upper_limit(NULL);
 	return vm_unmapped_area(&info);

From patchwork Tue Mar 26 02:16:51 2024
Subject: [PATCH v4 09/14] powerpc: Use initializer for struct vm_unmapped_area_info
From: Rick Edgecombe
Date: Mon, 25 Mar 2024 19:16:51 -0700
Message-Id: <20240326021656.202649-10-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
Future changes will need to add a new member to struct
vm_unmapped_area_info. This would cause trouble for any call site that
doesn't initialize the struct. Currently every caller sets each member
manually, so if new members are added they will be uninitialized and the
core code parsing the struct will see garbage in the new member.

It could be possible to initialize the new member manually to 0 at each
call site. This and a couple other options were discussed, and a working
consensus (see links) was that in general the best way to accomplish this
would be via static initialization with designated member initializers.
Having some struct vm_unmapped_area_info instances not zero initialized
will put those sites at risk of feeding garbage into vm_unmapped_area()
if the convention is to zero initialize the struct and any new member
addition misses a call site that initializes each member manually.

It could be possible to leave the code mostly untouched, and just change
the line:

    struct vm_unmapped_area_info info

to:

    struct vm_unmapped_area_info info = {};

However, that would leave cleanup for the members that are manually set
to zero, as it would no longer be required. So to reduce the chance of
bugs via uninitialized members, instead simply continue the process to
initialize the struct this way tree wide. This will zero any unspecified
members. Move the member initializers to the struct declaration when they
are known at that time. Leave out the members that were manually
initialized to zero, as this would be redundant for designated
initializers.

Signed-off-by: Rick Edgecombe
Acked-by: Michael Ellerman
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: Aneesh Kumar K.V
Cc: Naveen N. Rao
Cc: linuxppc-dev@lists.ozlabs.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
---
v4:
 - Remove designated zero initialization (Christophe Leroy)
v3:
 - Fixed spelling errors in log
 - Be consistent about field vs member in log

Hi,

This patch was split and refactored out of a tree-wide change [0] to just
zero-init each struct vm_unmapped_area_info. The overall goal of the
series is to help shadow stack guard gaps. Currently, there is only one
arch with shadow stacks, but two more are in progress. It is compile
tested only.

There was further discussion that this method of initializing the
structs, while nice in some ways, has a greater risk of introducing bugs
in some of the more complicated callers. Since this version was already
reviewed by arch maintainers, leave it as already acknowledged.
Thanks,
Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/powerpc/mm/book3s64/slice.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
index c0b58afb9a47..ef3ce37f1bb3 100644
--- a/arch/powerpc/mm/book3s64/slice.c
+++ b/arch/powerpc/mm/book3s64/slice.c
@@ -282,12 +282,10 @@ static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
 {
 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
 	unsigned long found, next_end;
-	struct vm_unmapped_area_info info;
-
-	info.flags = 0;
-	info.length = len;
-	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
-	info.align_offset = 0;
+	struct vm_unmapped_area_info info = {
+		.length = len,
+		.align_mask = PAGE_MASK & ((1ul << pshift) - 1),
+	};
 
 	/*
 	 * Check till the allow max value for this mmap request
@@ -326,13 +324,13 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
 {
 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
 	unsigned long found, prev;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.flags = VM_UNMAPPED_AREA_TOPDOWN,
+		.length = len,
+		.align_mask = PAGE_MASK & ((1ul << pshift) - 1),
+	};
 	unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);
 
-	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
-	info.length = len;
-	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
-	info.align_offset = 0;
 	/*
 	 * If we are trying to allocate above DEFAULT_MAP_WINDOW
 	 * Add the different to the mmap_base.

From patchwork Tue Mar 26 02:16:52 2024
Subject: [PATCH v4 10/14] treewide: Use initializer for struct vm_unmapped_area_info
From: Rick Edgecombe
Date: Mon, 25 Mar 2024 19:16:52 -0700
Message-Id: <20240326021656.202649-11-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
Future changes will need to add a new member to struct
vm_unmapped_area_info. This would cause trouble for any call site that
doesn't initialize the struct. Currently every caller sets each member
manually, so if new ones are added they will be uninitialized and the
core code parsing the struct will see garbage in the new member.

It could be possible to initialize the new member manually to 0 at each
call site. This and a couple other options were discussed. Having some
struct vm_unmapped_area_info instances not zero initialized will put
those sites at risk of feeding garbage into vm_unmapped_area(), if the
convention is to zero initialize the struct and any new field addition
misses a call site that initializes each field manually. So it is useful
to do things similarly across the kernel.

The consensus (see links) was that, taking into account both code
cleanliness and minimizing the chance of introducing bugs, the best
general way to accomplish this was C99 static initialization. As in:

    struct vm_unmapped_area_info info = {};

With this method of initialization, the whole struct will be zero
initialized, and any statements setting fields to zero will be unneeded.
The change should not leave cleanup at the call sites.

While iterating through the possible solutions a few archs kindly acked
other variations that still zero initialized the struct. These sites have
been modified in previous changes using the pattern acked by the
respective arch.

So to reduce the chance of bugs via uninitialized fields, perform a tree
wide change using the consensus for the best general way to do this
change. Use C99 static initialization to zero the struct and remove any
statements that simply set members to zero.
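[Editor's note: the "info = {}" spelling relies on the empty-brace
initializer, a GNU extension the kernel uses freely (C23 standardizes
it; strict C99 would be "{ 0 }"). Either way the whole struct is zeroed,
which is what makes the deleted "info.flags = 0;" style statements
redundant. A quick stand-alone check, using a stand-in struct rather
than the kernel's definition:]

#include <assert.h>

struct vm_unmapped_area_info {	/* stand-in, not the kernel definition */
	unsigned long flags;
	unsigned long length;
	unsigned long low_limit;
	unsigned long high_limit;
	unsigned long align_mask;
	unsigned long align_offset;
};

int main(void)
{
	struct vm_unmapped_area_info info = {};	/* zeroes every member */

	/* Only the meaningful assignments remain at the call site. */
	info.length = 4096;
	info.low_limit = 0x10000;
	info.high_limit = 0x7fff0000;

	assert(info.flags == 0);
	assert(info.align_mask == 0 && info.align_offset == 0);
	return 0;
}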
Signed-off-by: Rick Edgecombe
Reviewed-by: Kees Cook
Cc: linux-mm@kvack.org
Cc: linux-alpha@vger.kernel.org
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-csky@vger.kernel.org
Cc: loongarch@lists.linux.dev
Cc: linux-mips@vger.kernel.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
Link: https://lore.kernel.org/lkml/ec3e377a-c0a0-4dd3-9cb9-96517e54d17e@csgroup.eu/
---
v4:
 - Trivial rebase conflict in s390

Hi archs,

For some context, this is part of a larger series to improve shadow stack guard gaps. It involves plumbing a new field via struct vm_unmapped_area_info. The first user is x86, but arm and riscv will likely use it as well. For non-x86, the change is compile tested only.

Thanks,

Rick
---
 arch/alpha/kernel/osf_sys.c      |  5 +----
 arch/arc/mm/mmap.c               |  4 +---
 arch/arm/mm/mmap.c               |  5 ++---
 arch/loongarch/mm/mmap.c         |  3 +--
 arch/mips/mm/mmap.c              |  3 +--
 arch/s390/mm/hugetlbpage.c       |  7 ++-----
 arch/s390/mm/mmap.c              |  5 ++---
 arch/sh/mm/mmap.c                |  5 ++---
 arch/sparc/kernel/sys_sparc_32.c |  3 +--
 arch/sparc/kernel/sys_sparc_64.c |  5 ++---
 arch/sparc/mm/hugetlbpage.c      |  7 ++-----
 arch/x86/kernel/sys_x86_64.c     |  7 ++-----
 arch/x86/mm/hugetlbpage.c        |  7 ++-----
 fs/hugetlbfs/inode.c             |  7 ++-----
 mm/mmap.c                        |  9 ++-------
 15 files changed, 25 insertions(+), 57 deletions(-)

diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
index 5db88b627439..e5f881bc8288 100644
--- a/arch/alpha/kernel/osf_sys.c
+++ b/arch/alpha/kernel/osf_sys.c
@@ -1218,14 +1218,11 @@ static unsigned long
 arch_get_unmapped_area_1(unsigned long addr, unsigned long len,
 			 unsigned long limit)
 {
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = addr;
 	info.high_limit = limit;
-	info.align_mask = 0;
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 3c1c7ae73292..69a915297155 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -27,7 +27,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/*
 	 * We enforce the MAP_FIXED case.
@@ -51,11 +51,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = TASK_SIZE;
-	info.align_mask = 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
 	return vm_unmapped_area(&info);
 }
 
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index a0f8a0ca0788..d65d0e6ed10a 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -34,7 +34,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct vm_area_struct *vma;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
@@ -68,7 +68,6 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = TASK_SIZE;
@@ -87,7 +86,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	unsigned long addr = addr0;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
 
diff --git a/arch/loongarch/mm/mmap.c b/arch/loongarch/mm/mmap.c
index a9630a81b38a..4bbd449b4a47 100644
--- a/arch/loongarch/mm/mmap.c
+++ b/arch/loongarch/mm/mmap.c
@@ -24,7 +24,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	struct vm_area_struct *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
@@ -82,7 +82,6 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	 */
 	}
 
-	info.flags = 0;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = TASK_SIZE;
 	return vm_unmapped_area(&info);
 
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index 00fe90c6db3e..7e11d7b58761 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -34,7 +34,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	struct vm_area_struct *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (unlikely(len > TASK_SIZE))
 		return -ENOMEM;
@@ -92,7 +92,6 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	 */
 	}
 
-	info.flags = 0;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = TASK_SIZE;
 	return vm_unmapped_area(&info);
 
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index 219d906fe830..46de7a4c0309 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -258,14 +258,12 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = current->mm->mmap_base;
 	info.high_limit = TASK_SIZE;
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
@@ -274,7 +272,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	unsigned long addr;
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
@@ -282,7 +280,6 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = current->mm->mmap_base;
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*
 
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index 6b2e4436ad4a..206756946589 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -86,7 +86,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (len > TASK_SIZE - mmap_min_addr)
 		return -ENOMEM;
@@ -102,7 +102,6 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		goto check_asce_limit;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = TASK_SIZE;
@@ -122,7 +121,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
 {
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* requested length too big for entire address space */
 	if (len > TASK_SIZE - mmap_min_addr)
 
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index b82199878b45..bee329d4149a 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -57,7 +57,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
 	int do_colour_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
@@ -88,7 +88,6 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = TASK_UNMAPPED_BASE;
 	info.high_limit = TASK_SIZE;
@@ -106,7 +105,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	int do_colour_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
 
diff --git a/arch/sparc/kernel/sys_sparc_32.c b/arch/sparc/kernel/sys_sparc_32.c
index 082a551897ed..08a19727795c 100644
--- a/arch/sparc/kernel/sys_sparc_32.c
+++ b/arch/sparc/kernel/sys_sparc_32.c
@@ -41,7 +41,7 @@ SYSCALL_DEFINE0(getpagesize)
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags)
 {
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
@@ -59,7 +59,6 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 	if (!addr)
 		addr = TASK_UNMAPPED_BASE;
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = addr;
 	info.high_limit = TASK_SIZE;
 
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 1dbf7211666e..d9c3b34ca744 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -93,7 +93,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 	struct vm_area_struct * vma;
 	unsigned long task_size = TASK_SIZE;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (flags & MAP_FIXED) {
 		/* We do not accept a shared mapping if it would violate
@@ -126,7 +126,6 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = TASK_UNMAPPED_BASE;
 	info.high_limit = min(task_size, VA_EXCLUDE_START);
@@ -154,7 +153,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	unsigned long task_size = STACK_TOP32;
 	unsigned long addr = addr0;
 	int do_color_align;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* This should only ever run for 32-bit processes. */
 	BUG_ON(!test_thread_flag(TIF_32BIT));
 
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 38a1bef47efb..4caf56b32e26 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -31,17 +31,15 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *filp,
 {
 	struct hstate *h = hstate_file(filp);
 	unsigned long task_size = TASK_SIZE;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	if (test_thread_flag(TIF_32BIT))
 		task_size = STACK_TOP32;
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = TASK_UNMAPPED_BASE;
 	info.high_limit = min(task_size, VA_EXCLUDE_START);
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	if ((addr & ~PAGE_MASK) && task_size > VA_EXCLUDE_END) {
@@ -63,7 +61,7 @@ hugetlb_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct hstate *h = hstate_file(filp);
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* This should only ever run for 32-bit processes. */
 	BUG_ON(!test_thread_flag(TIF_32BIT));
@@ -73,7 +71,6 @@ hugetlb_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = mm->mmap_base;
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*
 
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index cb9fa1d5c66f..96b9d29aead0 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -118,7 +118,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	unsigned long begin, end;
 
 	if (flags & MAP_FIXED)
@@ -137,11 +137,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = begin;
 	info.high_limit = end;
-	info.align_mask = 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
 	if (filp) {
 		info.align_mask = get_align_mask();
@@ -158,7 +156,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	/* requested length too big for entire address space */
 	if (len > TASK_SIZE)
@@ -203,7 +201,6 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	if (addr > DEFAULT_MAP_WINDOW && !in_32bit_syscall())
 		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
 
-	info.align_mask = 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
 	if (filp) {
 		info.align_mask = get_align_mask();
 
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 6d77c0039617..fb600949a355 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -51,9 +51,8 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = get_mmap_base(1);
 
@@ -65,7 +64,6 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
 		task_size_32bit() : task_size_64bit(addr > DEFAULT_MAP_WINDOW);
 
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
@@ -74,7 +72,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
@@ -89,7 +87,6 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
 
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*
 
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 3dee18bf47ed..2f4e88552d3f 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -176,14 +176,12 @@ hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = current->mm->mmap_base;
 	info.high_limit = arch_get_mmap_end(addr, len, flags);
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
@@ -192,14 +190,13 @@ hugetlb_get_unmapped_area_topdown(struct file *file, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = arch_get_mmap_base(addr, current->mm->mmap_base);
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*
 
diff --git a/mm/mmap.c b/mm/mmap.c
index f734e4fa6d94..609c087bba8e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1705,7 +1705,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma, *prev;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
 
 	if (len > mmap_end - mmap_min_addr)
@@ -1723,12 +1723,9 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
 	info.length = len;
 	info.low_limit = mm->mmap_base;
 	info.high_limit = mmap_end;
-	info.align_mask = 0;
-	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
@@ -1753,7 +1750,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 {
 	struct vm_area_struct *vma, *prev;
 	struct mm_struct *mm = current->mm;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {};
 	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
 	/* requested length too big for entire address space */
@@ -1777,8 +1774,6 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 	info.length = len;
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
-	info.align_mask = 0;
-	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
 
 	/*

From patchwork Tue Mar 26 02:16:53 2024
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de, broonie@kernel.org, christophe.leroy@csgroup.eu, dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org, mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org
Cc: rick.p.edgecombe@intel.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 11/14] mm: Take placement mappings gap into account
Date: Mon, 25 Mar 2024 19:16:53 -0700
Message-Id: <20240326021656.202649-12-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
References: <20240326021656.202649-1-rick.p.edgecombe@intel.com>

When memory is being placed, mmap() will take care to respect the guard gaps of certain types of memory (VM_SHADOW_STACK, VM_GROWSUP and VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap() needs to consider two things:

1. That the new mapping isn't placed in any existing mapping's guard gaps.
2. That the new mapping isn't placed such that any existing mapping falls within *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any care around 2.
So for example, if there is a PAGE_SIZE free area, and an mmap() with a PAGE_SIZE size and a type that has a guard gap is being placed, mmap() may place the shadow stack in the PAGE_SIZE free area. Then the mapping that is supposed to have a guard gap will not have a gap to the adjacent VMA.

For MAP_GROWSDOWN/VM_GROWSDOWN and MAP_GROWSUP/VM_GROWSUP this has not been a problem in practice because applications place these kinds of mappings very early, when there are not many mappings to find a space between. But shadow stacks may be placed throughout the lifetime of the application.

Use the start_gap field to find a space that includes the guard gap for the new mapping. Take care to not interfere with the alignment.

Signed-off-by: Rick Edgecombe
Reviewed-by: Christophe Leroy
---
v3:
 - Spelling fix in comment

v2:
 - Remove VM_UNMAPPED_START_GAP_SET and have struct vm_unmapped_area_info initialized with zeros (in another patch). (Kirill)
 - Drop unrelated space change (Kirill)
 - Add comment around interactions of alignment and start gap step (Kirill)
---
 include/linux/mm.h |  1 +
 mm/mmap.c          | 12 +++++++++---
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8b13cd891b53..5c7f75edfde1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3438,6 +3438,7 @@ struct vm_unmapped_area_info {
 	unsigned long high_limit;
 	unsigned long align_mask;
 	unsigned long align_offset;
+	unsigned long start_gap;
 };
 
 extern unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info);

diff --git a/mm/mmap.c b/mm/mmap.c
index 609c087bba8e..2d9e7a999774 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1580,7 +1580,7 @@ static unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 	MA_STATE(mas, &current->mm->mm_mt, 0, 0);
 
 	/* Adjust search length to account for worst case alignment overhead */
-	length = info->length + info->align_mask;
+	length = info->length + info->align_mask + info->start_gap;
 	if (length < info->length)
 		return -ENOMEM;
@@ -1592,7 +1592,13 @@ static unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 	if (mas_empty_area(&mas, low_limit, high_limit - 1, length))
 		return -ENOMEM;
 
-	gap = mas.index;
+	/*
+	 * Adjust for the gap first so it doesn't interfere with the
+	 * later alignment. The first step is the minimum needed to
+	 * fulfill the start gap, the next step is the minimum to align
+	 * that. It is the minimum needed to fulfill both.
+	 */
+	gap = mas.index + info->start_gap;
 	gap += (info->align_offset - gap) & info->align_mask;
 	tmp = mas_next(&mas, ULONG_MAX);
 	if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
@@ -1631,7 +1637,7 @@ static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
 	MA_STATE(mas, &current->mm->mm_mt, 0, 0);
 
 	/* Adjust search length to account for worst case alignment overhead */
-	length = info->length + info->align_mask;
+	length = info->length + info->align_mask + info->start_gap;
 	if (length < info->length)
 		return -ENOMEM;
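The ordering described in the comment above matters. Here is a standalone userspace sketch of the same arithmetic with made-up numbers; `base` stands in for mas.index, the start of the free range found by the maple tree, and all values are hypothetical:

#include <stdio.h>

int main(void)
{
	unsigned long base = 0x7f0000001000UL;	/* hypothetical free-range start (mas.index) */
	unsigned long start_gap = 0x1000UL;	/* one guard page, e.g. for a shadow stack */
	unsigned long align_mask = 0xffffUL;	/* example alignment requirement: 64KiB - 1 */
	unsigned long align_offset = 0UL;

	/* Step 1: skip past the guard gap */
	unsigned long gap = base + start_gap;

	/* Step 2: round up to the requested alignment. Because the gap
	 * was applied first, rounding up can only widen the gap below
	 * the mapping, never shrink it under start_gap. */
	gap += (align_offset - gap) & align_mask;

	printf("mapping at %#lx, unused gap below = %#lx bytes\n",
	       gap, gap - base);
	return 0;
}

Running this prints a placement of 0x7f0000010000 with 0xf000 bytes of gap below, confirming the guard page survives the alignment step.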
From patchwork Tue Mar 26 02:16:54 2024

From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de, broonie@kernel.org, christophe.leroy@csgroup.eu, dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org, mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org
Cc: rick.p.edgecombe@intel.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 12/14] x86/mm: Implement HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
Date: Mon, 25 Mar 2024 19:16:54 -0700
Message-Id: <20240326021656.202649-13-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
References: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
When memory is being placed, mmap() will take care to respect the guard gaps of certain types of memory (VM_SHADOW_STACK, VM_GROWSUP and VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap() needs to consider two things:

1. That the new mapping isn't placed in any existing mapping's guard gaps.
2. That the new mapping isn't placed such that any existing mapping falls within *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any care around 2. So for example, if there is a PAGE_SIZE free area, and an mmap() with a PAGE_SIZE size and a type that has a guard gap is being placed, mmap() may place the shadow stack in the PAGE_SIZE free area. Then the mapping that is supposed to have a guard gap will not have a gap to the adjacent VMA.

Add x86 arch implementations of arch_get_unmapped_area_vmflags/_topdown() so future changes can allow the guard gap of the type of vma being placed to be taken into account. This will be used for shadow stack memory.

Signed-off-by: Rick Edgecombe
---
v3:
 - Commit log grammar

v2:
 - Remove unnecessary added extern
---
 arch/x86/include/asm/pgtable_64.h |  1 +
 arch/x86/kernel/sys_x86_64.c      | 25 ++++++++++++++++++++-----
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index 7e9db77231ac..3c4407271d08 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -245,6 +245,7 @@ extern void cleanup_highmap(void);
 
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+#define HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
 
 #define PAGE_AGP PAGE_KERNEL_NOCACHE
 #define HAVE_PAGE_AGP 1

diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 96b9d29aead0..75966afb6251 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -113,8 +113,8 @@ static void find_start_end(unsigned long addr, unsigned long flags,
 }
 
 unsigned long
-arch_get_unmapped_area(struct file *filp, unsigned long addr,
-		unsigned long len, unsigned long pgoff, unsigned long flags)
+arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr, unsigned long len,
+			       unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma;
@@ -149,9 +149,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 }
 
 unsigned long
-arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
-			  const unsigned long len, const unsigned long pgoff,
-			  const unsigned long flags)
+arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr0,
+				       unsigned long len, unsigned long pgoff,
+				       unsigned long flags, vm_flags_t vm_flags)
 {
 	struct vm_area_struct *vma;
 	struct mm_struct *mm = current->mm;
@@ -220,3 +220,18 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	 */
 	return arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
 }
+
+unsigned long
+arch_get_unmapped_area(struct file *filp, unsigned long addr,
+		       unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+	return arch_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags, 0);
+}
+
+unsigned long
+arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr,
+			       const unsigned long len, const unsigned long pgoff,
+			       const unsigned long flags)
+{
+	return arch_get_unmapped_area_topdown_vmflags(filp, addr, len, pgoff, flags, 0);
+}
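For architectures that never define HAVE_ARCH_UNMAPPED_AREA_VMFLAGS, the generic side of this series supplies a fallback that ignores vm_flags; the exact plumbing lives in an earlier patch of the series, so the sketch below is indicative of the convention rather than a quote of it:

/* Hedged sketch: the generic fallback for archs without the define */
#ifndef HAVE_ARCH_UNMAPPED_AREA_VMFLAGS
static unsigned long
arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
			       unsigned long len, unsigned long pgoff,
			       unsigned long flags, vm_flags_t vm_flags)
{
	/* vm_flags is ignored: no flag-aware placement on this arch,
	 * so behavior is identical to the pre-series hook. */
	return arch_get_unmapped_area(filp, addr, len, pgoff, flags);
}
#endif

This is why the x86 patch can redefine arch_get_unmapped_area() as a thin wrapper passing vm_flags = 0: existing callers keep their behavior, while flag-aware callers opt in through the new entry points.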
From patchwork Tue Mar 26 02:16:55 2024
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de, broonie@kernel.org, christophe.leroy@csgroup.eu, dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org, mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org
Cc: rick.p.edgecombe@intel.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 13/14] x86/mm: Care about shadow stack guard gap during placement
Date: Mon, 25 Mar 2024 19:16:55 -0700
Message-Id: <20240326021656.202649-14-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
References: <20240326021656.202649-1-rick.p.edgecombe@intel.com>

When memory is being placed, mmap() will take care to respect the guard gaps of certain types of memory (VM_SHADOW_STACK, VM_GROWSUP and VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap() needs to consider two things:

1. That the new mapping isn't placed in any existing mapping's guard gaps.
2. That the new mapping isn't placed such that any existing mapping falls within *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any care around 2.
So for example, if there is a PAGE_SIZE free area, and an mmap() with a PAGE_SIZE size and a type that has a guard gap is being placed, mmap() may place the shadow stack in the PAGE_SIZE free area. Then the mapping that is supposed to have a guard gap will not have a gap to the adjacent VMA.

Now that vm_flags is passed into the arch get_unmapped_area() variants, and vm_unmapped_area() is ready to consider it, have VM_SHADOW_STACK mappings get guard gap consideration for scenario 2.

Signed-off-by: Rick Edgecombe
---
 arch/x86/kernel/sys_x86_64.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 75966afb6251..01d7cd85ef97 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -112,6 +112,14 @@ static void find_start_end(unsigned long addr, unsigned long flags,
 	*end = task_size_64bit(addr > DEFAULT_MAP_WINDOW);
 }
 
+static inline unsigned long stack_guard_placement(vm_flags_t vm_flags)
+{
+	if (vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
+
 unsigned long
 arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr, unsigned long len,
@@ -141,6 +149,7 @@ arch_get_unmapped_area_vmflags(struct file *filp, unsigned long addr, unsigned l
 	info.low_limit = begin;
 	info.high_limit = end;
 	info.align_offset = pgoff << PAGE_SHIFT;
+	info.start_gap = stack_guard_placement(vm_flags);
 	if (filp) {
 		info.align_mask = get_align_mask();
@@ -190,6 +199,7 @@ arch_get_unmapped_area_topdown_vmflags(struct file *filp, unsigned long addr0,
 
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = get_mmap_base(0);
+	info.start_gap = stack_guard_placement(vm_flags);
 
 	/*
 	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
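The net effect, restated as a runnable userspace sketch with hypothetical addresses (not kernel code; it only mirrors the arithmetic the kernel now performs for VM_SHADOW_STACK):

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE 0x1000UL

int main(void)
{
	unsigned long hole_start = 0x200000UL;		/* end of a preexisting mapping */
	unsigned long start_gap = PAGE_SIZE;		/* stack_guard_placement(VM_SHADOW_STACK) */
	unsigned long shstk = hole_start + start_gap;	/* where the search now places the mapping */

	/* One guard page separates the shadow stack from the mapping
	 * below it, instead of the two being directly adjacent. */
	assert(shstk - hole_start == PAGE_SIZE);
	printf("shadow stack at %#lx, guard page below at %#lx\n",
	       shstk, hole_start);
	return 0;
}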
From patchwork Tue Mar 26 02:16:56 2024
From: Rick Edgecombe
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de, broonie@kernel.org, christophe.leroy@csgroup.eu, dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com, luto@kernel.org, mingo@redhat.com, peterz@infradead.org, tglx@linutronix.de, x86@kernel.org
Cc: rick.p.edgecombe@intel.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v4 14/14] selftests/x86: Add placement guard gap test for shstk
Date: Mon, 25 Mar 2024 19:16:56 -0700
Message-Id: <20240326021656.202649-15-rick.p.edgecombe@intel.com>
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
References: <20240326021656.202649-1-rick.p.edgecombe@intel.com>

The existing shadow stack test for guard gaps just checks that new mappings are not placed in an existing mapping's guard gap. Add one that checks that new mappings are not placed such that preexisting mappings are in the new mapping's guard gap.

Signed-off-by: Rick Edgecombe
---
 .../testing/selftests/x86/test_shadow_stack.c | 67 +++++++++++++++++--
 1 file changed, 63 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/x86/test_shadow_stack.c b/tools/testing/selftests/x86/test_shadow_stack.c
index 757e6527f67e..ee909a7927f9 100644
--- a/tools/testing/selftests/x86/test_shadow_stack.c
+++ b/tools/testing/selftests/x86/test_shadow_stack.c
@@ -556,7 +556,7 @@ struct node {
  * looked at the shadow stack gaps.
  * 5. See if it landed in the gap.
  */
-int test_guard_gap(void)
+int test_guard_gap_other_gaps(void)
 {
 	void *free_area, *shstk, *test_map = (void *)0xFFFFFFFFFFFFFFFF;
 	struct node *head = NULL, *cur;
@@ -593,11 +593,64 @@ int test_guard_gap(void)
 	if (shstk - test_map - PAGE_SIZE != PAGE_SIZE)
 		return 1;
 
-	printf("[OK]\tGuard gap test\n");
+	printf("[OK]\tGuard gap test, other mapping's gaps\n");
 
 	return 0;
 }
 
+/* Tests respecting the guard gap of the mapping getting placed */
+int test_guard_gap_new_mappings_gaps(void)
+{
+	void *free_area, *shstk_start, *test_map = (void *)0xFFFFFFFFFFFFFFFF;
+	struct node *head = NULL, *cur;
+	int ret = 0;
+
+	free_area = mmap(0, PAGE_SIZE * 4, PROT_READ | PROT_WRITE,
+			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	munmap(free_area, PAGE_SIZE * 4);
+
+	/* Test letting map_shadow_stack find a free space */
+	shstk_start = mmap(free_area, PAGE_SIZE, PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	if (shstk_start == MAP_FAILED || shstk_start != free_area)
+		return 1;
+
+	while (test_map > shstk_start) {
+		test_map = (void *)syscall(__NR_map_shadow_stack, 0, PAGE_SIZE, 0);
+		if (test_map == MAP_FAILED) {
+			printf("[INFO]\tmap_shadow_stack MAP_FAILED\n");
+			ret = 1;
+			break;
+		}
+
+		cur = malloc(sizeof(*cur));
+		cur->mapping = test_map;
+
+		cur->next = head;
+		head = cur;
+
+		if (test_map == free_area + PAGE_SIZE) {
+			printf("[INFO]\tNew mapping has other mapping in guard gap!\n");
+			ret = 1;
+			break;
+		}
+	}
+
+	while (head) {
+		cur = head;
+		head = cur->next;
+		munmap(cur->mapping, PAGE_SIZE);
+		free(cur);
+	}
+
+	munmap(shstk_start, PAGE_SIZE);
+
+	if (!ret)
+		printf("[OK]\tGuard gap test, placement mapping's gaps\n");
+
+	return ret;
+}
+
 /*
  * Too complicated to pull it out of the 32 bit header, but also get the
  * 64 bit one needed above. Just define a copy here.
@@ -850,9 +903,15 @@ int main(int argc, char *argv[])
 		goto out;
 	}
 
-	if (test_guard_gap()) {
+	if (test_guard_gap_other_gaps()) {
 		ret = 1;
-		printf("[FAIL]\tGuard gap test\n");
+		printf("[FAIL]\tGuard gap test, other mappings' gaps\n");
+		goto out;
+	}
+
+	if (test_guard_gap_new_mappings_gaps()) {
+		ret = 1;
+		printf("[FAIL]\tGuard gap test, placement mapping's gaps\n");
 		goto out;
 	}
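For orientation, the layout the new test drives toward, with offsets relative to free_area; this is a sketch of the test's expectations, not kernel-guaranteed placement:

	free_area + 0 * PAGE_SIZE : anon page the test maps (shstk_start)
	free_area + 1 * PAGE_SIZE : must stay free; a shadow stack here
	                            would have shstk_start inside its own
	                            guard gap
	free_area + 2-3 pages     : acceptable shadow stack placements

If any map_shadow_stack() call returns free_area + PAGE_SIZE, scenario 2 from the cover text has been violated and the test reports failure; otherwise the loop runs until placements drop below shstk_start and the test passes.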