From patchwork Thu Feb 15 23:13:29 2024
X-Patchwork-Submitter: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
X-Patchwork-Id: 13559263
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, debug@rivosinc.com,
	broonie@kernel.org, kirill.shutemov@linux.intel.com, keescook@chromium.org,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, luto@kernel.org,
	peterz@infradead.org, hpa@zytor.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: rick.p.edgecombe@intel.com
Subject: [RFC PATCH 5/8] mm: Take placement mappings gap into account
Date: Thu, 15 Feb 2024 15:13:29 -0800
Message-Id: <20240215231332.1556787-6-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240215231332.1556787-1-rick.p.edgecombe@intel.com>
References: <20240215231332.1556787-1-rick.p.edgecombe@intel.com>
MIME-Version: 1.0

When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOW_STACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

 1. That the new mapping isn't placed in any existing mapping's guard
    gaps.
 2. That the new mapping isn't placed such that existing mappings end
    up inside *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2.
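For reference, this is roughly how a typical caller, such as
generic_get_unmapped_area() in mm/mmap.c, fills in the search request
today (a simplified sketch, not taken from this patch; exact limits
vary by caller and architecture). Note that the struct is not
zero-initialized, and nothing in it describes a gap that the *new*
mapping itself needs:

	struct vm_unmapped_area_info info;

	/* Search parameters only describe the new mapping's own size,
	 * limits and alignment. */
	info.flags = 0;
	info.length = len;
	info.low_limit = mm->mmap_base;
	info.high_limit = mmap_end;
	info.align_mask = 0;
	info.align_offset = 0;
	return vm_unmapped_area(&info);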
For example, if there is a PAGE_SIZE free area, and an mmap() of
PAGE_SIZE is being placed for a mapping type that has a guard gap,
mmap() may place the shadow stack in that PAGE_SIZE free area. Then the
mapping that is supposed to have a guard gap will not have a gap to the
adjacent VMA.

For MAP_GROWSDOWN/VM_GROWSDOWN and MAP_GROWSUP/VM_GROWSUP this has not
been a problem in practice, because applications place these kinds of
mappings very early, when there are not many mappings to find a space
between. But shadow stacks may be placed throughout the lifetime of the
application.

So add a start_gap field to struct vm_unmapped_area_info, and define a
VM_UNMAPPED_START_GAP_SET flag to indicate that the field has been set.
Most vm_unmapped_area_info structs are not zeroed by their callers, so
without the flag the added field would often contain garbage. Use
VM_UNMAPPED_START_GAP_SET in unmapped_area()/unmapped_area_topdown() to
find a space that includes the guard gap for the new mapping, taking
care not to interfere with the alignment. (An illustrative usage sketch
follows the diff below.)

Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 include/linux/mm.h |  2 ++
 mm/mmap.c          | 21 ++++++++++++++-------
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9addf16dbf18..160bb6db7a16 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3393,12 +3393,14 @@ extern unsigned long __must_check vm_mmap(struct file *, unsigned long,
 
 struct vm_unmapped_area_info {
 #define VM_UNMAPPED_AREA_TOPDOWN 1
+#define VM_UNMAPPED_START_GAP_SET 2
 	unsigned long flags;
 	unsigned long length;
 	unsigned long low_limit;
 	unsigned long high_limit;
 	unsigned long align_mask;
 	unsigned long align_offset;
+	unsigned long start_gap;
 };
 
 extern unsigned long vm_unmapped_area(struct vm_unmapped_area_info *info);
diff --git a/mm/mmap.c b/mm/mmap.c
index 936d728ba1ca..1b6c333656f9 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1567,14 +1567,17 @@ static inline int accountable_mapping(struct file *file, vm_flags_t vm_flags)
  */
 static unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 {
-	unsigned long length, gap;
+	unsigned long length, gap, start_gap = 0;
 	unsigned long low_limit, high_limit;
 	struct vm_area_struct *tmp;
 
 	MA_STATE(mas, &current->mm->mm_mt, 0, 0);
 
+	if (info->flags & VM_UNMAPPED_START_GAP_SET)
+		start_gap = info->start_gap;
+
 	/* Adjust search length to account for worst case alignment overhead */
-	length = info->length + info->align_mask;
+	length = info->length + info->align_mask + start_gap;
 	if (length < info->length)
 		return -ENOMEM;
 
@@ -1586,7 +1589,7 @@ static unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 	if (mas_empty_area(&mas, low_limit, high_limit - 1, length))
 		return -ENOMEM;
 
-	gap = mas.index;
+	gap = mas.index + start_gap;
 	gap += (info->align_offset - gap) & info->align_mask;
 	tmp = mas_next(&mas, ULONG_MAX);
 	if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
@@ -1619,13 +1622,17 @@ static unsigned long unmapped_area(struct vm_unmapped_area_info *info)
  */
 static unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
 {
-	unsigned long length, gap, gap_end;
+	unsigned long length, gap, gap_end, start_gap = 0;
 	unsigned long low_limit, high_limit;
 	struct vm_area_struct *tmp;
 
 	MA_STATE(mas, &current->mm->mm_mt, 0, 0);
+
+	if (info->flags & VM_UNMAPPED_START_GAP_SET)
+		start_gap = info->start_gap;
+
 	/* Adjust search length to account for worst case alignment overhead */
-	length = info->length + info->align_mask;
+	length = info->length + info->align_mask + start_gap;
 	if (length < info->length)
 		return -ENOMEM;
 
@@ -1832,7 +1839,7 @@ unsigned long mm_get_unmapped_area_vmflags(struct mm_struct *mm, struct file *fi
 
 unsigned long
 __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
-		unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
+		    unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
 {
 	unsigned long (*get_area)(struct file *, unsigned long,
 				  unsigned long, unsigned long, unsigned long)
@@ -1883,7 +1890,7 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 
 unsigned long
 get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
-		unsigned long pgoff, unsigned long flags)
+		  unsigned long pgoff, unsigned long flags)
 {
 	return __get_unmapped_area(file, addr, len, pgoff, flags, 0);
 }
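
As a usage illustration only (not part of this diff; the shadow stack
wiring is expected in a later patch of the series), a placement path
that wants a guard page next to the new mapping could fill in the new
field roughly as below. The PAGE_SIZE gap value and the limit values
are assumptions for the sketch, not taken from this series:

	struct vm_unmapped_area_info info;

	/* Request a search that leaves start_gap free in front of the
	 * returned address; limits below are illustrative only. */
	info.flags = VM_UNMAPPED_AREA_TOPDOWN | VM_UNMAPPED_START_GAP_SET;
	info.length = len;
	info.low_limit = PAGE_SIZE;
	info.high_limit = current->mm->mmap_base;
	info.align_mask = 0;
	info.align_offset = 0;
	info.start_gap = PAGE_SIZE;	/* assumed one-page guard gap */
	addr = vm_unmapped_area(&info);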