From patchwork Tue Mar 26 02:16:48 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Rick Edgecombe <rick.p.edgecombe@intel.com>
X-Patchwork-Id: 13603260
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: Liam.Howlett@oracle.com, akpm@linux-foundation.org, bp@alien8.de,
	broonie@kernel.org, christophe.leroy@csgroup.eu,
	dave.hansen@linux.intel.com, debug@rivosinc.com, hpa@zytor.com,
	keescook@chromium.org, kirill.shutemov@linux.intel.com,
	luto@kernel.org, mingo@redhat.com, peterz@infradead.org,
	tglx@linutronix.de, x86@kernel.org
Cc: rick.p.edgecombe@intel.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH v4 06/14] thp: Add thp_get_unmapped_area_vmflags()
Date: Mon, 25 Mar 2024 19:16:48 -0700
Message-Id: <20240326021656.202649-7-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
References: <20240326021656.202649-1-rick.p.edgecombe@intel.com>
MIME-Version: 1.0

When memory is being placed, mmap() will take care to respect the guard
gaps of certain types of memory (VM_SHADOW_STACK, VM_GROWSUP and
VM_GROWSDOWN). In order to ensure guard gaps between mappings, mmap()
needs to consider two things:

 1. That the new mapping isn't placed in any existing mapping's guard
    gaps.
 2. That the new mapping isn't placed such that any existing mappings
    end up inside *its* guard gaps.

The long-standing behavior of mmap() is to ensure 1, but not take any
care around 2. So for example, if there is a PAGE_SIZE free area, and an
mmap() with a PAGE_SIZE size and a type that has a guard gap is being
placed, mmap() may place the shadow stack in the PAGE_SIZE free area.
Then the mapping that is supposed to have a guard gap will not have a
gap to the adjacent VMA (a sketch of this arithmetic follows below).

Add a THP implementation of the vm_flags variant of get_unmapped_area().
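(Not part of this patch: to make consideration 2 concrete, below is a
minimal standalone sketch of the placement arithmetic, using
hypothetical names. A mapping with a start guard gap only fits in a free
area that covers the gap plus the mapping, which is why the PAGE_SIZE
hole in the example above is not usable.)

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/*
 * Hypothetical helper: can a mapping of @len with a start guard gap of
 * @gap be placed in the free hole [@hole_start, @hole_end)?  The
 * mapping itself would begin at @hole_start + @gap.
 */
static bool fits_with_start_gap(unsigned long hole_start,
				unsigned long hole_end,
				unsigned long len, unsigned long gap)
{
	return hole_end - hole_start >= gap + len;
}

int main(void)
{
	/* A PAGE_SIZE hole cannot hold a PAGE_SIZE mapping plus its gap. */
	printf("%d\n", fits_with_start_gap(0, PAGE_SIZE,
					   PAGE_SIZE, PAGE_SIZE));	/* 0 */
	/* A two-page hole can. */
	printf("%d\n", fits_with_start_gap(0, 2 * PAGE_SIZE,
					   PAGE_SIZE, PAGE_SIZE));	/* 1 */
	return 0;
}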
Future changes will call this from mmap.c in the do_mmap() path to allow
shadow stacks to be placed with consideration taken for the start guard
gap. Shadow stack memory is always private and anonymous, and so special
guard gap logic is not needed in a lot of cases, but it can be mapped by
THP, so it needs to be handled.

Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 include/linux/huge_mm.h | 11 +++++++++++
 mm/huge_memory.c        | 23 ++++++++++++++++-------
 mm/mmap.c               | 12 +++++++-----
 3 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index de0c89105076..cc599de5e397 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -262,6 +262,9 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
+unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags,
+		vm_flags_t vm_flags);
 
 void folio_prep_large_rmappable(struct folio *folio);
 bool can_split_folio(struct folio *folio, int *pextra_pins);
@@ -417,6 +420,14 @@ static inline void folio_prep_large_rmappable(struct folio *folio) {}
 
 #define thp_get_unmapped_area	NULL
 
+static inline unsigned long
+thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+			      unsigned long len, unsigned long pgoff,
+			      unsigned long flags, vm_flags_t vm_flags)
+{
+	return 0;
+}
+
 static inline bool
 can_split_folio(struct folio *folio, int *pextra_pins)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cede9ccb84dc..b29f3e456888 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -808,7 +808,8 @@ static inline bool is_transparent_hugepage(struct folio *folio)
 
 static unsigned long __thp_get_unmapped_area(struct file *filp,
 		unsigned long addr, unsigned long len,
-		loff_t off, unsigned long flags, unsigned long size)
+		loff_t off, unsigned long flags, unsigned long size,
+		vm_flags_t vm_flags)
 {
 	loff_t off_end = off + len;
 	loff_t off_align = round_up(off, size);
@@ -824,8 +825,8 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	if (len_pad < len || (off + len_pad) < off)
 		return 0;
 
-	ret = mm_get_unmapped_area(current->mm, filp, addr, len_pad,
-				   off >> PAGE_SHIFT, flags);
+	ret = mm_get_unmapped_area_vmflags(current->mm, filp, addr, len_pad,
+					   off >> PAGE_SHIFT, flags, vm_flags);
 
 	/*
 	 * The failure might be due to length padding. The caller will retry
@@ -850,17 +851,25 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	return ret;
 }
 
-unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
-		unsigned long len, unsigned long pgoff, unsigned long flags)
+unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags,
+		vm_flags_t vm_flags)
 {
 	unsigned long ret;
 	loff_t off = (loff_t)pgoff << PAGE_SHIFT;
 
-	ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE);
+	ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE, vm_flags);
 	if (ret)
 		return ret;
 
-	return mm_get_unmapped_area(current->mm, filp, addr, len, pgoff, flags);
+	return mm_get_unmapped_area_vmflags(current->mm, filp, addr, len, pgoff, flags,
+					    vm_flags);
+}
+
+unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+	return thp_get_unmapped_area_vmflags(filp, addr, len, pgoff, flags, 0);
 }
 EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 68b5bfcebadd..f734e4fa6d94 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1861,20 +1861,22 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		 * so use shmem's get_unmapped_area in case it can be huge.
 		 */
 		get_area = shmem_get_unmapped_area;
-	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
-		/* Ensures that larger anonymous mappings are THP aligned. */
-		get_area = thp_get_unmapped_area;
 	}
 
 	/* Always treat pgoff as zero for anonymous memory. */
 	if (!file)
 		pgoff = 0;
 
-	if (get_area)
+	if (get_area) {
 		addr = get_area(file, addr, len, pgoff, flags);
-	else
+	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+		/* Ensures that larger anonymous mappings are THP aligned. */
+		addr = thp_get_unmapped_area_vmflags(file, addr, len,
+						     pgoff, flags, vm_flags);
+	} else {
 		addr = mm_get_unmapped_area_vmflags(current->mm, file, addr, len,
 						    pgoff, flags, vm_flags);
+	}
 
 	if (IS_ERR_VALUE(addr))
 		return addr;
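(Not part of this patch: the mm/huge_memory.c hunks follow a common
pattern -- add a *_vmflags() variant that threads the flags through,
and keep the old entry point as a thin wrapper passing 0. Below is a
minimal standalone sketch of that shape, with hypothetical names rather
than the real kernel helpers.)

#include <stdio.h>

typedef unsigned long vm_flags_t;

/* Stand-in for the padded, PMD-aligned search (pretend it fails). */
static unsigned long aligned_search(unsigned long len, vm_flags_t vm_flags)
{
	return 0;
}

/* Stand-in for the plain search (pretend it succeeds). */
static unsigned long plain_search(unsigned long len, vm_flags_t vm_flags)
{
	return 0x7f0000000000UL;
}

/*
 * New variant: try the aligned placement first, fall back to the plain
 * search, passing vm_flags to both -- the same shape as
 * thp_get_unmapped_area_vmflags() above.
 */
static unsigned long get_area_vmflags(unsigned long len, vm_flags_t vm_flags)
{
	unsigned long ret = aligned_search(len, vm_flags);

	if (ret)
		return ret;
	return plain_search(len, vm_flags);
}

/* The old entry point survives as a thin wrapper passing no flags. */
static unsigned long get_area(unsigned long len)
{
	return get_area_vmflags(len, 0);
}

int main(void)
{
	printf("0x%lx\n", get_area(2UL << 20));
	return 0;
}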