From patchwork Sat Mar 2 00:17:04 2024
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 13579297
From: Rick Edgecombe
To: rick.p.edgecombe@intel.com
Cc: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
    broonie@kernel.org, dave.hansen@linux.intel.com, debug@rivosinc.com,
    hpa@zytor.com, keescook@chromium.org, kirill.shutemov@linux.intel.com,
    luto@kernel.org, mingo@redhat.com, peterz@infradead.org,
    sparclinux@vger.kernel.org, tglx@linutronix.de, x86@kernel.org,
    Russell King, linux-arm-kernel@lists.infradead.org
Subject: [RFC v2.1 02/12] ARM: Use initializer for struct
 vm_unmapped_area_info
Date: Fri, 1 Mar 2024 16:17:04 -0800
Message-Id: <20240302001714.674091-2-rick.p.edgecombe@intel.com>
In-Reply-To: <20240302001714.674091-1-rick.p.edgecombe@intel.com>
References: <20240226190951.3240433-6-rick.p.edgecombe@intel.com>
 <20240302001714.674091-1-rick.p.edgecombe@intel.com>

Future changes will need to add a new member to struct
vm_unmapped_area_info. This would cause trouble for any call site that
doesn't initialize the struct. Currently every caller sets each field
manually, so if new fields are added they will be uninitialized and the
core code parsing the struct will see garbage in the new field.

It could be possible to initialize the new field manually to 0 at each
call site. This and a couple of other options were discussed, and the
consensus (see links) was that in general the best way to accomplish this
would be via static initialization with designated field initializers.
If the convention becomes to zero-initialize the struct, any instance
that is left not zero-initialized is at risk of feeding garbage into
vm_unmapped_area() whenever a new field addition misses a call site that
still sets each field manually.

It could be possible to leave the code mostly untouched, and just change
the line:

struct vm_unmapped_area_info info

to:

struct vm_unmapped_area_info info = {};

However, that would leave cleanup for the fields that are manually set
to zero, as it would no longer be required.

So to reduce the chance of bugs via uninitialized fields, instead simply
continue the process of initializing the struct this way tree wide. This
will zero any unspecified members. Move the field initializers to the
struct declaration when they are known at that time. Leave out the
fields that were manually initialized to zero, as this would be
redundant for designated initializers.

Signed-off-by: Rick Edgecombe
Cc: Russell King
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
---

Hi,

This patch was split and refactored out of a tree-wide change [0] to
just zero-init each struct vm_unmapped_area_info. The overall goal of
the series is to help shadow stack guard gaps. Currently, there is only
one arch with shadow stacks, but two more are in progress. It is 0day
tested only.
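As a side note for reviewers: the conversion leans on the C rule that a
designated initializer zeroes every member that is not explicitly named.
Below is a minimal userspace sketch of that behavior; the struct is a
simplified, hypothetical stand-in for struct vm_unmapped_area_info and
is not part of the patch:

#include <stdio.h>

/* Simplified stand-in; field names mirror the kernel struct, but this
 * is not the real definition of struct vm_unmapped_area_info. */
struct demo_info {
	unsigned long flags;
	unsigned long length;
	unsigned long low_limit;
	unsigned long high_limit;
	unsigned long align_mask;
	unsigned long align_offset;
};

int main(void)
{
	/*
	 * Members not named here (flags, align_mask, and any field added
	 * to the struct later) are zero-initialized, so a new field can
	 * never carry garbage into the consumer of the struct.
	 */
	struct demo_info info = {
		.length = 4096,
		.low_limit = 0x10000,
		.high_limit = 0xffffffff,
	};

	printf("flags=%lu align_mask=%lu\n", info.flags, info.align_mask);
	return 0;
}

Built with any standard C compiler, this prints "flags=0 align_mask=0",
which is the property the tree-wide change depends on.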
Thanks,

Rick

[0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
---
 arch/arm/mm/mmap.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index a0f8a0ca0788..f50c8ed98be0 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -34,7 +34,12 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct vm_area_struct *vma;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.length = len,
+		.low_limit = mm->mmap_base,
+		.high_limit = TASK_SIZE,
+		.align_offset = pgoff << PAGE_SHIFT
+	};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
@@ -68,12 +73,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		return addr;
 	}
 
-	info.flags = 0;
-	info.length = len;
-	info.low_limit = mm->mmap_base;
-	info.high_limit = TASK_SIZE;
 	info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
-	info.align_offset = pgoff << PAGE_SHIFT;
 	return vm_unmapped_area(&info);
 }
 
@@ -87,7 +87,13 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	unsigned long addr = addr0;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.flags = VM_UNMAPPED_AREA_TOPDOWN,
+		.length = len,
+		.low_limit = FIRST_USER_ADDRESS,
+		.high_limit = mm->mmap_base,
+		.align_offset = pgoff << PAGE_SHIFT
+	};
 
 	/*
 	 * We only need to do colour alignment if either the I or D
@@ -119,12 +125,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		return addr;
 	}
 
-	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
-	info.length = len;
-	info.low_limit = FIRST_USER_ADDRESS;
-	info.high_limit = mm->mmap_base;
 	info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
-	info.align_offset = pgoff << PAGE_SHIFT;
 	addr = vm_unmapped_area(&info);
 
 	/*