From patchwork Mon Mar 7 12:24:53 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joao Martins
X-Patchwork-Id: 12771981
From: Joao Martins
To: linux-mm@kvack.org
Cc: Dan Williams, Vishal Verma, Matthew Wilcox, Jason Gunthorpe, Jane Chu,
    Muchun Song, Mike Kravetz, Andrew Morton, Jonathan Corbet,
    Christoph Hellwig, nvdimm@lists.linux.dev, linux-doc@vger.kernel.org,
    Joao Martins
Subject: [PATCH v8 1/5] mm/sparse-vmemmap: add a pgmap argument to section activation
Date: Mon, 7 Mar 2022 12:24:53 +0000
Message-Id: <20220307122457.10066-2-joao.m.martins@oracle.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20220307122457.10066-1-joao.m.martins@oracle.com>
References: <20220307122457.10066-1-joao.m.martins@oracle.com>
Precedence: bulk
X-Mailing-List: nvdimm@lists.linux.dev
In support of using compound pages for devmap mappings, plumb the pgmap
down to the vmemmap_populate implementation. Note that while the altmap
is retrievable from the pgmap, the memory hotplug code passes an altmap
without a pgmap [*], so both need to be plumbed independently.
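For illustration only (not part of this patch; the caller below is
hypothetical): a devmap user of the new plumbing fills in both fields of
struct mhp_params, and __add_pages() forwards the pgmap down to
sparse_add_section():

	#include <linux/memory_hotplug.h>
	#include <linux/memremap.h>
	#include <linux/pfn.h>

	/* Sketch: hot-add a devmap range with altmap and pgmap wired up. */
	static int example_add_devmap_range(int nid, struct range *range,
					    struct vmem_altmap *altmap,
					    struct dev_pagemap *pgmap)
	{
		struct mhp_params params = {
			.altmap = altmap,	/* backing store for the memmap */
			.pgprot = PAGE_KERNEL,
			.pgmap  = pgmap,	/* field added by this patch */
		};

		return add_pages(nid, PHYS_PFN(range->start),
				 PHYS_PFN(range_len(range)), &params);
	}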
So in addition to @altmap, pass @pgmap to sparse section populate functions namely: sparse_add_section section_activate populate_section_memmap __populate_section_memmap Passing @pgmap allows __populate_section_memmap() to both fetch the vmemmap_shift in which memmap metadata is created for and also to let sparse-vmemmap fetch pgmap ranges to co-relate to a given section and pick whether to just reuse tail pages from past onlined sections. While at it, fix the kdoc for @altmap for sparse_add_section(). [*] https://lore.kernel.org/linux-mm/20210319092635.6214-1-osalvador@suse.de/ Signed-off-by: Joao Martins Reviewed-by: Dan Williams Reviewed-by: Muchun Song --- include/linux/memory_hotplug.h | 5 ++++- include/linux/mm.h | 3 ++- mm/memory_hotplug.c | 3 ++- mm/sparse-vmemmap.c | 3 ++- mm/sparse.c | 26 ++++++++++++++++---------- 5 files changed, 26 insertions(+), 14 deletions(-) diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index 1ce6f8044f1e..e0b2209ab71c 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -15,6 +15,7 @@ struct memory_block; struct memory_group; struct resource; struct vmem_altmap; +struct dev_pagemap; #ifdef CONFIG_HAVE_ARCH_NODEDATA_EXTENSION /* @@ -122,6 +123,7 @@ typedef int __bitwise mhp_t; struct mhp_params { struct vmem_altmap *altmap; pgprot_t pgprot; + struct dev_pagemap *pgmap; }; bool mhp_range_allowed(u64 start, u64 size, bool need_mapping); @@ -333,7 +335,8 @@ extern void remove_pfn_range_from_zone(struct zone *zone, unsigned long nr_pages); extern bool is_memblock_offlined(struct memory_block *mem); extern int sparse_add_section(int nid, unsigned long pfn, - unsigned long nr_pages, struct vmem_altmap *altmap); + unsigned long nr_pages, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap); extern void sparse_remove_section(struct mem_section *ms, unsigned long pfn, unsigned long nr_pages, unsigned long map_offset, struct vmem_altmap *altmap); diff --git a/include/linux/mm.h b/include/linux/mm.h index 49692a64d645..5f549cf6a4e8 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3111,7 +3111,8 @@ int vmemmap_remap_alloc(unsigned long start, unsigned long end, void *sparse_buffer_alloc(unsigned long size); struct page * __populate_section_memmap(unsigned long pfn, - unsigned long nr_pages, int nid, struct vmem_altmap *altmap); + unsigned long nr_pages, int nid, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap); pgd_t *vmemmap_pgd_populate(unsigned long addr, int node); p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node); pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node); diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index aee69281dad6..2cc1c49a2be6 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -328,7 +328,8 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages, /* Select all remaining pages up to the next section boundary */ cur_nr_pages = min(end_pfn - pfn, SECTION_ALIGN_UP(pfn + 1) - pfn); - err = sparse_add_section(nid, pfn, cur_nr_pages, altmap); + err = sparse_add_section(nid, pfn, cur_nr_pages, altmap, + params->pgmap); if (err) break; cond_resched(); diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 8aecd6b3896c..c506f77cff23 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -641,7 +641,8 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end, } struct page * __meminit __populate_section_memmap(unsigned long pfn, - unsigned long nr_pages, int 
nid, struct vmem_altmap *altmap) + unsigned long nr_pages, int nid, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { unsigned long start = (unsigned long) pfn_to_page(pfn); unsigned long end = start + nr_pages * sizeof(struct page); diff --git a/mm/sparse.c b/mm/sparse.c index 952f06d8f373..d2d76d158b39 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -427,7 +427,8 @@ static unsigned long __init section_map_size(void) } struct page __init *__populate_section_memmap(unsigned long pfn, - unsigned long nr_pages, int nid, struct vmem_altmap *altmap) + unsigned long nr_pages, int nid, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { unsigned long size = section_map_size(); struct page *map = sparse_buffer_alloc(size); @@ -524,7 +525,7 @@ static void __init sparse_init_nid(int nid, unsigned long pnum_begin, break; map = __populate_section_memmap(pfn, PAGES_PER_SECTION, - nid, NULL); + nid, NULL, NULL); if (!map) { pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.", __func__, nid); @@ -629,9 +630,10 @@ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn) #ifdef CONFIG_SPARSEMEM_VMEMMAP static struct page * __meminit populate_section_memmap(unsigned long pfn, - unsigned long nr_pages, int nid, struct vmem_altmap *altmap) + unsigned long nr_pages, int nid, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { - return __populate_section_memmap(pfn, nr_pages, nid, altmap); + return __populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap); } static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages, @@ -700,7 +702,8 @@ static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages) } #else struct page * __meminit populate_section_memmap(unsigned long pfn, - unsigned long nr_pages, int nid, struct vmem_altmap *altmap) + unsigned long nr_pages, int nid, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { return kvmalloc_node(array_size(sizeof(struct page), PAGES_PER_SECTION), GFP_KERNEL, nid); @@ -823,7 +826,8 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages, } static struct page * __meminit section_activate(int nid, unsigned long pfn, - unsigned long nr_pages, struct vmem_altmap *altmap) + unsigned long nr_pages, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { struct mem_section *ms = __pfn_to_section(pfn); struct mem_section_usage *usage = NULL; @@ -855,7 +859,7 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn, if (nr_pages < PAGES_PER_SECTION && early_section(ms)) return pfn_to_page(pfn); - memmap = populate_section_memmap(pfn, nr_pages, nid, altmap); + memmap = populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap); if (!memmap) { section_deactivate(pfn, nr_pages, altmap); return ERR_PTR(-ENOMEM); @@ -869,7 +873,8 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn, * @nid: The node to add section on * @start_pfn: start pfn of the memory range * @nr_pages: number of pfns to add in the section - * @altmap: device page map + * @altmap: alternate pfns to allocate the memmap backing store + * @pgmap: alternate compound page geometry for devmap mappings * * This is only intended for hotplug. * @@ -883,7 +888,8 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn, * * -ENOMEM - Out of memory. 
*/ int __meminit sparse_add_section(int nid, unsigned long start_pfn, - unsigned long nr_pages, struct vmem_altmap *altmap) + unsigned long nr_pages, struct vmem_altmap *altmap, + struct dev_pagemap *pgmap) { unsigned long section_nr = pfn_to_section_nr(start_pfn); struct mem_section *ms; @@ -894,7 +900,7 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn, if (ret < 0) return ret; - memmap = section_activate(nid, start_pfn, nr_pages, altmap); + memmap = section_activate(nid, start_pfn, nr_pages, altmap, pgmap); if (IS_ERR(memmap)) return PTR_ERR(memmap);

From patchwork Mon Mar 7 12:24:54 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joao Martins
X-Patchwork-Id: 12771695
From: Joao Martins
To: linux-mm@kvack.org
Cc: Dan Williams, Vishal Verma, Matthew Wilcox, Jason Gunthorpe, Jane Chu,
    Muchun Song, Mike Kravetz, Andrew Morton, Jonathan Corbet,
    Christoph Hellwig, nvdimm@lists.linux.dev, linux-doc@vger.kernel.org,
    Joao Martins
Subject: [PATCH v8 2/5] mm/sparse-vmemmap: refactor core of vmemmap_populate_basepages() to helper
Date: Mon, 7 Mar 2022 12:24:54 +0000
Message-Id: <20220307122457.10066-3-joao.m.martins@oracle.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20220307122457.10066-1-joao.m.martins@oracle.com>
References: <20220307122457.10066-1-joao.m.martins@oracle.com>
Precedence: bulk
X-Mailing-List: nvdimm@lists.linux.dev
In preparation for describing a memmap with compound pages, move the
actual pte population logic into a separate function
vmemmap_populate_address() and have a new helper
vmemmap_populate_range() walk through all base pages it needs to populate. While doing that, change the helper to use a pte_t* as return value, rather than an hardcoded errno of 0 or -ENOMEM. Signed-off-by: Joao Martins Reviewed-by: Muchun Song --- mm/sparse-vmemmap.c | 53 ++++++++++++++++++++++++++++++--------------- 1 file changed, 36 insertions(+), 17 deletions(-) diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index c506f77cff23..1b30a82f285e 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -608,38 +608,57 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node) return pgd; } -int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end, - int node, struct vmem_altmap *altmap) +static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node, + struct vmem_altmap *altmap) { - unsigned long addr = start; pgd_t *pgd; p4d_t *p4d; pud_t *pud; pmd_t *pmd; pte_t *pte; + pgd = vmemmap_pgd_populate(addr, node); + if (!pgd) + return NULL; + p4d = vmemmap_p4d_populate(pgd, addr, node); + if (!p4d) + return NULL; + pud = vmemmap_pud_populate(p4d, addr, node); + if (!pud) + return NULL; + pmd = vmemmap_pmd_populate(pud, addr, node); + if (!pmd) + return NULL; + pte = vmemmap_pte_populate(pmd, addr, node, altmap); + if (!pte) + return NULL; + vmemmap_verify(pte, node, addr, addr + PAGE_SIZE); + + return pte; +} + +static int __meminit vmemmap_populate_range(unsigned long start, + unsigned long end, int node, + struct vmem_altmap *altmap) +{ + unsigned long addr = start; + pte_t *pte; + for (; addr < end; addr += PAGE_SIZE) { - pgd = vmemmap_pgd_populate(addr, node); - if (!pgd) - return -ENOMEM; - p4d = vmemmap_p4d_populate(pgd, addr, node); - if (!p4d) - return -ENOMEM; - pud = vmemmap_pud_populate(p4d, addr, node); - if (!pud) - return -ENOMEM; - pmd = vmemmap_pmd_populate(pud, addr, node); - if (!pmd) - return -ENOMEM; - pte = vmemmap_pte_populate(pmd, addr, node, altmap); + pte = vmemmap_populate_address(addr, node, altmap); if (!pte) return -ENOMEM; - vmemmap_verify(pte, node, addr, addr + PAGE_SIZE); } return 0; } +int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end, + int node, struct vmem_altmap *altmap) +{ + return vmemmap_populate_range(start, end, node, altmap); +} + struct page * __meminit __populate_section_memmap(unsigned long pfn, unsigned long nr_pages, int nid, struct vmem_altmap *altmap, struct dev_pagemap *pgmap) From patchwork Mon Mar 7 12:24:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Joao Martins X-Patchwork-Id: 12771698 Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com [205.220.177.32]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EDC193B38 for ; Mon, 7 Mar 2022 12:25:39 +0000 (UTC) Received: from pps.filterd (m0246632.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 227Btv9U028756; Mon, 7 Mar 2022 12:25:19 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : content-type : mime-version; s=corp-2021-07-09; bh=9NzCFt1Ozy2mhawUz8VK277naG0+CSyaJzZL6gsG9F0=; b=BzRK7aNG02zS2EilykdZGa1YCqW696dyNYTpuH+Sck2gCoDCsizj6ORdddAnSetXfYOP pFQETYKpvJ7qesbN1K2LIE1QERnVh7FEBAHmNzNmaUnTG/j24uLpqcysrLW42ukhQBll 
From: Joao Martins
To: linux-mm@kvack.org
Cc: Dan Williams, Vishal Verma, Matthew Wilcox, Jason Gunthorpe, Jane Chu,
    Muchun Song, Mike Kravetz, Andrew Morton, Jonathan Corbet,
    Christoph Hellwig, nvdimm@lists.linux.dev, linux-doc@vger.kernel.org,
    Joao Martins
Subject: [PATCH v8 3/5] mm/hugetlb_vmemmap: move comment block to Documentation/vm
Date: Mon, 7 Mar 2022 12:24:55 +0000
Message-Id: <20220307122457.10066-4-joao.m.martins@oracle.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20220307122457.10066-1-joao.m.martins@oracle.com>
References: <20220307122457.10066-1-joao.m.martins@oracle.com>
Precedence: bulk
X-Mailing-List: nvdimm@lists.linux.dev
MIME-Version: 1.0
In preparation for device-dax to use the hugetlbfs compound page tail
deduplication technique, move the comment block explanation into a
common place in Documentation/vm.
Cc: Muchun Song Cc: Mike Kravetz Suggested-by: Dan Williams Signed-off-by: Joao Martins Reviewed-by: Muchun Song Reviewed-by: Dan Williams --- Documentation/vm/index.rst | 1 + Documentation/vm/vmemmap_dedup.rst | 173 +++++++++++++++++++++++++++++ mm/hugetlb_vmemmap.c | 168 +--------------------------- 3 files changed, 175 insertions(+), 167 deletions(-) create mode 100644 Documentation/vm/vmemmap_dedup.rst diff --git a/Documentation/vm/index.rst b/Documentation/vm/index.rst index 44365c4574a3..2fb612bb72c9 100644 --- a/Documentation/vm/index.rst +++ b/Documentation/vm/index.rst @@ -37,5 +37,6 @@ algorithms. If you are looking for advice on simply allocating memory, see the transhuge unevictable-lru vmalloced-kernel-stacks + vmemmap_dedup z3fold zsmalloc diff --git a/Documentation/vm/vmemmap_dedup.rst b/Documentation/vm/vmemmap_dedup.rst new file mode 100644 index 000000000000..485ccf4f7b10 --- /dev/null +++ b/Documentation/vm/vmemmap_dedup.rst @@ -0,0 +1,173 @@ +.. SPDX-License-Identifier: GPL-2.0 + +================================== +Free some vmemmap pages of HugeTLB +================================== + +The struct page structures (page structs) are used to describe a physical +page frame. By default, there is a one-to-one mapping from a page frame to +it's corresponding page struct. + +HugeTLB pages consist of multiple base page size pages and is supported by many +architectures. See Documentation/admin-guide/mm/hugetlbpage.rst for more +details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB are +currently supported. Since the base page size on x86 is 4KB, a 2MB HugeTLB page +consists of 512 base pages and a 1GB HugeTLB page consists of 4096 base pages. +For each base page, there is a corresponding page struct. + +Within the HugeTLB subsystem, only the first 4 page structs are used to +contain unique information about a HugeTLB page. __NR_USED_SUBPAGE provides +this upper limit. The only 'useful' information in the remaining page structs +is the compound_head field, and this field is the same for all tail pages. + +By removing redundant page structs for HugeTLB pages, memory can be returned +to the buddy allocator for other uses. + +Different architectures support different HugeTLB pages. For example, the +following table is the HugeTLB page size supported by x86 and arm64 +architectures. Because arm64 supports 4k, 16k, and 64k base pages and +supports contiguous entries, so it supports many kinds of sizes of HugeTLB +page. + ++--------------+-----------+-----------------------------------------------+ +| Architecture | Page Size | HugeTLB Page Size | ++--------------+-----------+-----------+-----------+-----------+-----------+ +| x86-64 | 4KB | 2MB | 1GB | | | ++--------------+-----------+-----------+-----------+-----------+-----------+ +| | 4KB | 64KB | 2MB | 32MB | 1GB | +| +-----------+-----------+-----------+-----------+-----------+ +| arm64 | 16KB | 2MB | 32MB | 1GB | | +| +-----------+-----------+-----------+-----------+-----------+ +| | 64KB | 2MB | 512MB | 16GB | | ++--------------+-----------+-----------+-----------+-----------+-----------+ + +When the system boot up, every HugeTLB page has more than one struct page +structs which size is (unit: pages):: + + struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + +Where HugeTLB_Size is the size of the HugeTLB page. We know that the size +of the HugeTLB page is always n times PAGE_SIZE. 
So we can get the following +relationship:: + + HugeTLB_Size = n * PAGE_SIZE + +Then:: + + struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + = n * sizeof(struct page) / PAGE_SIZE + +We can use huge mapping at the pud/pmd level for the HugeTLB page. + +For the HugeTLB page of the pmd level mapping, then:: + + struct_size = n * sizeof(struct page) / PAGE_SIZE + = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE + = sizeof(struct page) / sizeof(pte_t) + = 64 / 8 + = 8 (pages) + +Where n is how many pte entries which one page can contains. So the value of +n is (PAGE_SIZE / sizeof(pte_t)). + +This optimization only supports 64-bit system, so the value of sizeof(pte_t) +is 8. And this optimization also applicable only when the size of struct page +is a power of two. In most cases, the size of struct page is 64 bytes (e.g. +x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the +size of struct page structs of it is 8 page frames which size depends on the +size of the base page. + +For the HugeTLB page of the pud level mapping, then:: + + struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd) + = PAGE_SIZE / 8 * 8 (pages) + = PAGE_SIZE (pages) + +Where the struct_size(pmd) is the size of the struct page structs of a +HugeTLB page of the pmd level mapping. + +E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB +HugeTLB page consists in 4096. + +Next, we take the pmd level mapping of the HugeTLB page as an example to +show the internal implementation of this optimization. There are 8 pages +struct page structs associated with a HugeTLB page which is pmd mapped. + +Here is how things look before optimization:: + + HugeTLB struct pages(8 pages) page frame(8 pages) + +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + | | | 0 | -------------> | 0 | + | | +-----------+ +-----------+ + | | | 1 | -------------> | 1 | + | | +-----------+ +-----------+ + | | | 2 | -------------> | 2 | + | | +-----------+ +-----------+ + | | | 3 | -------------> | 3 | + | | +-----------+ +-----------+ + | | | 4 | -------------> | 4 | + | PMD | +-----------+ +-----------+ + | level | | 5 | -------------> | 5 | + | mapping | +-----------+ +-----------+ + | | | 6 | -------------> | 6 | + | | +-----------+ +-----------+ + | | | 7 | -------------> | 7 | + | | +-----------+ +-----------+ + | | + | | + | | + +-----------+ + +The value of page->compound_head is the same for all tail pages. The first +page of page structs (page 0) associated with the HugeTLB page contains the 4 +page structs necessary to describe the HugeTLB. The only use of the remaining +pages of page structs (page 1 to page 7) is to point to page->compound_head. +Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of page structs +will be used for each HugeTLB page. This will allow us to free the remaining +7 pages to the buddy allocator. 
+ +Here is how things look after remapping:: + + HugeTLB struct pages(8 pages) page frame(8 pages) + +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + | | | 0 | -------------> | 0 | + | | +-----------+ +-----------+ + | | | 1 | ---------------^ ^ ^ ^ ^ ^ ^ + | | +-----------+ | | | | | | + | | | 2 | -----------------+ | | | | | + | | +-----------+ | | | | | + | | | 3 | -------------------+ | | | | + | | +-----------+ | | | | + | | | 4 | ---------------------+ | | | + | PMD | +-----------+ | | | + | level | | 5 | -----------------------+ | | + | mapping | +-----------+ | | + | | | 6 | -------------------------+ | + | | +-----------+ | + | | | 7 | ---------------------------+ + | | +-----------+ + | | + | | + | | + +-----------+ + +When a HugeTLB is freed to the buddy system, we should allocate 7 pages for +vmemmap pages and restore the previous mapping relationship. + +For the HugeTLB page of the pud level mapping. It is similar to the former. +We also can use this approach to free (PAGE_SIZE - 1) vmemmap pages. + +Apart from the HugeTLB page of the pmd/pud level mapping, some architectures +(e.g. aarch64) provides a contiguous bit in the translation table entries +that hints to the MMU to indicate that it is one of a contiguous set of +entries that can be cached in a single TLB entry. + +The contiguous bit is used to increase the mapping size at the pmd and pte +(last) level. So this type of HugeTLB page can be optimized only when its +size of the struct page structs is greater than 1 page. + +Notice: The head vmemmap page is not freed to the buddy allocator and all +tail vmemmap pages are mapped to the head vmemmap page frame. So we can see +more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB page) +associated with each HugeTLB page. The compound_head() can handle this +correctly (more details refer to the comment above compound_head()). diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 791626983c2e..dbaa837b19c6 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -6,173 +6,7 @@ * * Author: Muchun Song * - * The struct page structures (page structs) are used to describe a physical - * page frame. By default, there is a one-to-one mapping from a page frame to - * it's corresponding page struct. - * - * HugeTLB pages consist of multiple base page size pages and is supported by - * many architectures. See hugetlbpage.rst in the Documentation directory for - * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB - * are currently supported. Since the base page size on x86 is 4KB, a 2MB - * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of - * 4096 base pages. For each base page, there is a corresponding page struct. - * - * Within the HugeTLB subsystem, only the first 4 page structs are used to - * contain unique information about a HugeTLB page. __NR_USED_SUBPAGE provides - * this upper limit. The only 'useful' information in the remaining page structs - * is the compound_head field, and this field is the same for all tail pages. - * - * By removing redundant page structs for HugeTLB pages, memory can be returned - * to the buddy allocator for other uses. - * - * Different architectures support different HugeTLB pages. For example, the - * following table is the HugeTLB page size supported by x86 and arm64 - * architectures. Because arm64 supports 4k, 16k, and 64k base pages and - * supports contiguous entries, so it supports many kinds of sizes of HugeTLB - * page. 
- * - * +--------------+-----------+-----------------------------------------------+ - * | Architecture | Page Size | HugeTLB Page Size | - * +--------------+-----------+-----------+-----------+-----------+-----------+ - * | x86-64 | 4KB | 2MB | 1GB | | | - * +--------------+-----------+-----------+-----------+-----------+-----------+ - * | | 4KB | 64KB | 2MB | 32MB | 1GB | - * | +-----------+-----------+-----------+-----------+-----------+ - * | arm64 | 16KB | 2MB | 32MB | 1GB | | - * | +-----------+-----------+-----------+-----------+-----------+ - * | | 64KB | 2MB | 512MB | 16GB | | - * +--------------+-----------+-----------+-----------+-----------+-----------+ - * - * When the system boot up, every HugeTLB page has more than one struct page - * structs which size is (unit: pages): - * - * struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE - * - * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size - * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following - * relationship. - * - * HugeTLB_Size = n * PAGE_SIZE - * - * Then, - * - * struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE - * = n * sizeof(struct page) / PAGE_SIZE - * - * We can use huge mapping at the pud/pmd level for the HugeTLB page. - * - * For the HugeTLB page of the pmd level mapping, then - * - * struct_size = n * sizeof(struct page) / PAGE_SIZE - * = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE - * = sizeof(struct page) / sizeof(pte_t) - * = 64 / 8 - * = 8 (pages) - * - * Where n is how many pte entries which one page can contains. So the value of - * n is (PAGE_SIZE / sizeof(pte_t)). - * - * This optimization only supports 64-bit system, so the value of sizeof(pte_t) - * is 8. And this optimization also applicable only when the size of struct page - * is a power of two. In most cases, the size of struct page is 64 bytes (e.g. - * x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the - * size of struct page structs of it is 8 page frames which size depends on the - * size of the base page. - * - * For the HugeTLB page of the pud level mapping, then - * - * struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd) - * = PAGE_SIZE / 8 * 8 (pages) - * = PAGE_SIZE (pages) - * - * Where the struct_size(pmd) is the size of the struct page structs of a - * HugeTLB page of the pmd level mapping. - * - * E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB - * HugeTLB page consists in 4096. - * - * Next, we take the pmd level mapping of the HugeTLB page as an example to - * show the internal implementation of this optimization. There are 8 pages - * struct page structs associated with a HugeTLB page which is pmd mapped. - * - * Here is how things look before optimization. 
- * - * HugeTLB struct pages(8 pages) page frame(8 pages) - * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ - * | | | 0 | -------------> | 0 | - * | | +-----------+ +-----------+ - * | | | 1 | -------------> | 1 | - * | | +-----------+ +-----------+ - * | | | 2 | -------------> | 2 | - * | | +-----------+ +-----------+ - * | | | 3 | -------------> | 3 | - * | | +-----------+ +-----------+ - * | | | 4 | -------------> | 4 | - * | PMD | +-----------+ +-----------+ - * | level | | 5 | -------------> | 5 | - * | mapping | +-----------+ +-----------+ - * | | | 6 | -------------> | 6 | - * | | +-----------+ +-----------+ - * | | | 7 | -------------> | 7 | - * | | +-----------+ +-----------+ - * | | - * | | - * | | - * +-----------+ - * - * The value of page->compound_head is the same for all tail pages. The first - * page of page structs (page 0) associated with the HugeTLB page contains the 4 - * page structs necessary to describe the HugeTLB. The only use of the remaining - * pages of page structs (page 1 to page 7) is to point to page->compound_head. - * Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of page structs - * will be used for each HugeTLB page. This will allow us to free the remaining - * 7 pages to the buddy allocator. - * - * Here is how things look after remapping. - * - * HugeTLB struct pages(8 pages) page frame(8 pages) - * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ - * | | | 0 | -------------> | 0 | - * | | +-----------+ +-----------+ - * | | | 1 | ---------------^ ^ ^ ^ ^ ^ ^ - * | | +-----------+ | | | | | | - * | | | 2 | -----------------+ | | | | | - * | | +-----------+ | | | | | - * | | | 3 | -------------------+ | | | | - * | | +-----------+ | | | | - * | | | 4 | ---------------------+ | | | - * | PMD | +-----------+ | | | - * | level | | 5 | -----------------------+ | | - * | mapping | +-----------+ | | - * | | | 6 | -------------------------+ | - * | | +-----------+ | - * | | | 7 | ---------------------------+ - * | | +-----------+ - * | | - * | | - * | | - * +-----------+ - * - * When a HugeTLB is freed to the buddy system, we should allocate 7 pages for - * vmemmap pages and restore the previous mapping relationship. - * - * For the HugeTLB page of the pud level mapping. It is similar to the former. - * We also can use this approach to free (PAGE_SIZE - 1) vmemmap pages. - * - * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures - * (e.g. aarch64) provides a contiguous bit in the translation table entries - * that hints to the MMU to indicate that it is one of a contiguous set of - * entries that can be cached in a single TLB entry. - * - * The contiguous bit is used to increase the mapping size at the pmd and pte - * (last) level. So this type of HugeTLB page can be optimized only when its - * size of the struct page structs is greater than 1 page. - * - * Notice: The head vmemmap page is not freed to the buddy allocator and all - * tail vmemmap pages are mapped to the head vmemmap page frame. So we can see - * more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB page) - * associated with each HugeTLB page. The compound_head() can handle this - * correctly (more details refer to the comment above compound_head()). 
+ * See Documentation/vm/vmemmap_dedup.rst */ #define pr_fmt(fmt) "HugeTLB: " fmt

From patchwork Mon Mar 7 12:24:56 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Joao Martins
X-Patchwork-Id: 12771697
b=gDQMEhX+SyZPZGF2CUs+h0/+SnVMUCNxnH9fZzJd4kP29b5tOv66UTcxtnr8mRc7AlmhvjlBj175k/HB2CHfhvPPR3DvvzHvGMtrlEBFDmhu7rYmp9y4lflWILUVa+oxYOgxBW+xYknCj0A3hRZkJq7PCqnGAnnxJE+slR4lTdA= Received: from BLAPR10MB4835.namprd10.prod.outlook.com (2603:10b6:208:331::11) by DM6PR10MB3451.namprd10.prod.outlook.com (2603:10b6:5:61::33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.5038.15; Mon, 7 Mar 2022 12:25:17 +0000 Received: from BLAPR10MB4835.namprd10.prod.outlook.com ([fe80::750f:bf1d:1599:3406]) by BLAPR10MB4835.namprd10.prod.outlook.com ([fe80::750f:bf1d:1599:3406%5]) with mapi id 15.20.5038.027; Mon, 7 Mar 2022 12:25:17 +0000 From: Joao Martins To: linux-mm@kvack.org Cc: Dan Williams , Vishal Verma , Matthew Wilcox , Jason Gunthorpe , Jane Chu , Muchun Song , Mike Kravetz , Andrew Morton , Jonathan Corbet , Christoph Hellwig , nvdimm@lists.linux.dev, linux-doc@vger.kernel.org, Joao Martins Subject: [PATCH v8 4/5] mm/sparse-vmemmap: improve memory savings for compound devmaps Date: Mon, 7 Mar 2022 12:24:56 +0000 Message-Id: <20220307122457.10066-5-joao.m.martins@oracle.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20220307122457.10066-1-joao.m.martins@oracle.com> References: <20220307122457.10066-1-joao.m.martins@oracle.com> X-ClientProxiedBy: LO4P265CA0019.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:2ae::21) To BLAPR10MB4835.namprd10.prod.outlook.com (2603:10b6:208:331::11) Precedence: bulk X-Mailing-List: nvdimm@lists.linux.dev List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: 12aad827-83ca-48db-e330-08da003589f9 X-MS-TrafficTypeDiagnostic: DM6PR10MB3451:EE_ X-Microsoft-Antispam-PRVS: X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: ++Gqo33g3goHY589bm1f4m5lDxuUneekPYAd2jWoxi2+lTZrlzQ4x8PHEoPPy57XmI+tuO0Qc8Zzq7o0JLbZyiPa335sI/LwVlnvlEI5Q6JyjVBMmDrHP//XoJfNS0RBwG3rJzkcTuodf5EjgZ9XyhhAyuGAFJl7cY/L91E3S38soEnKfmenlDudpnKNQ+DmiVNh/XjCG5FYm6IRWW6DikttyJfXGOxfwHchiWJl50SwPVcUCs4UpzXxLUjGeaaadaFkoWXXlCQku/YU/cN9MILv4KDrFGM8QdsQqijUU0sgjnxZxnXQT89FLLf/Ggze6APdGlynH+0JrP98R1dOPGjEN0W5+A2/Jj3w3fS00WYcn6mFuqs2/cec8Nzp6pnGFmqZnbWfefhMEX9nu0evUwbixlzo/FaFaTqYyH+CNW1vDZX4frR1sbItZhwEQyjwIDGO+Sf8Rr85lnsgB259b+RBxe4sWc7htqzilU3EhW3HioZhXlGpP6r7/zI5vNu4+Sh3stCZ6fBvs7YxU463pz2pZCnCH762zLeUE5bIznehwuyuu48aL8q7IV6Bdqh/nfnFkEEAjnEW32/eyJkitdx/ySP16ezeQa0IcRL7dGQpeLRcjOonwEglzo6JbxctGb+1nUttfSLCK0JQz2SvfO6x6YNBQjXRA0yh9NTUhnek12JXv9286D11fVanmArR3sm8Ss4bRFSLY/NAv/a+KEgof3nKph3/udjd2tqJj0o= X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BLAPR10MB4835.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230001)(366004)(6486002)(26005)(508600001)(6666004)(1076003)(107886003)(2616005)(186003)(6512007)(52116002)(86362001)(6506007)(83380400001)(38100700002)(8936002)(103116003)(5660300002)(7416002)(66946007)(8676002)(66476007)(66556008)(4326008)(30864003)(38350700002)(36756003)(2906002)(54906003)(6916009)(316002)(25903002);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: 
A compound devmap is a dev_pagemap with @vmemmap_shift > 0, meaning that pages
are mapped at a given huge page alignment and use compound pages as opposed to
order-0 pages.

Take advantage of the fact that most tail pages look the same (except the
first two) to minimize struct page overhead. Allocate a separate page for the
vmemmap area which contains the head page, and a separate one for the next 64
pages. The rest of the subsections then reuse this tail vmemmap page to
initialize the rest of the tail pages.

Sections are arch-dependent (e.g. on x86 it's 64M, 128M or 512M) and when
initializing a compound devmap with a big enough @vmemmap_shift (e.g. 1G PUD)
it may cross multiple sections.
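For reference, here is a minimal user-space sketch of the arithmetic behind
the savings quoted in the next paragraph (6 of the 8 vmemmap pages saved per
2M compound page, 4094 per 1G), assuming 4K base pages and a 64-byte struct
page on x86_64; the program and its names are illustrative only and not part
of this patch:

/*
 * Back-of-the-envelope arithmetic for compound devmap vmemmap savings.
 * Not kernel code: assumes 4K base pages and a 64-byte struct page.
 */
#include <stdio.h>

#define PAGE_SZ		4096UL
#define STRUCT_PAGE_SZ	64UL	/* sizeof(struct page) on x86_64 */

static void report(const char *name, unsigned long compound_size)
{
	unsigned long nr_struct_pages = compound_size / PAGE_SZ;
	unsigned long vmemmap_pages = nr_struct_pages * STRUCT_PAGE_SZ / PAGE_SZ;
	/* With dedup only the head vmemmap page plus one tail page stay unique. */
	unsigned long unique = 2;

	printf("%s: %lu vmemmap pages, %lu unique, %lu saved\n",
	       name, vmemmap_pages, unique, vmemmap_pages - unique);
}

int main(void)
{
	report("2M compound devmap", 2UL << 20);	/* 8 pages, 2 unique, 6 saved */
	report("1G compound devmap", 1UL << 30);	/* 4096 pages, 2 unique, 4094 saved */
	return 0;
}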
The vmemmap code needs to consult @pgmap so that multiple sections that all
map the same tail data can refer back to the first copy of that data for a
given gigantic page.

On compound devmaps with 2M align, this mechanism lets 6 pages be saved out
of the 8 PFNs necessary to map the subsection's 512 struct pages. On a 1G
compound devmap it saves 4094 pages.

Altmap isn't supported yet, given various restrictions in the altmap pfn
allocator, so fall back to the already-in-use vmemmap_populate(). It is worth
noting that altmap for devmap mappings was there to relieve the pressure of
inordinate amounts of memmap space to map terabytes of pmem. With compound
pages the motivation for altmaps for pmem gets reduced.

Signed-off-by: Joao Martins
Reviewed-by: Muchun Song
---
 Documentation/vm/vmemmap_dedup.rst |  56 +++++++++++-
 include/linux/mm.h                 |   2 +-
 mm/memremap.c                      |   1 +
 mm/sparse-vmemmap.c                | 132 ++++++++++++++++++++++++++---
 4 files changed, 177 insertions(+), 14 deletions(-)

diff --git a/Documentation/vm/vmemmap_dedup.rst b/Documentation/vm/vmemmap_dedup.rst
index 485ccf4f7b10..c9c495f62d12 100644
--- a/Documentation/vm/vmemmap_dedup.rst
+++ b/Documentation/vm/vmemmap_dedup.rst
@@ -1,8 +1,11 @@
 .. SPDX-License-Identifier: GPL-2.0

-==================================
-Free some vmemmap pages of HugeTLB
-==================================
+=========================================
+A vmemmap diet for HugeTLB and Device DAX
+=========================================
+
+HugeTLB
+=======

 The struct page structures (page structs) are used to describe a physical
 page frame. By default, there is a one-to-one mapping from a page frame to
@@ -171,3 +174,50 @@ tail vmemmap pages are mapped to the head vmemmap page frame. So we can see
 more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB page)
 associated with each HugeTLB page. The compound_head() can handle this
 correctly (more details refer to the comment above compound_head()).
+
+Device DAX
+==========
+
+The device-dax interface uses the same tail deduplication technique explained
+in the previous chapter, except when used with the vmemmap in
+the device (altmap).
+
+The following page sizes are supported in DAX: PAGE_SIZE (4K on x86_64),
+PMD_SIZE (2M on x86_64) and PUD_SIZE (1G on x86_64).
+
+The differences with HugeTLB are relatively minor.
+
+It only uses 3 page structs for storing all information as opposed
+to 4 on HugeTLB pages.
+
+There's no remapping of vmemmap given that device-dax memory is not part of
+System RAM ranges initialized at boot. Thus the tail page deduplication
+happens at a later stage when we populate the sections. HugeTLB reuses the
+head vmemmap page, whereas device-dax reuses the tail vmemmap
+page. This results in only half of the savings compared to HugeTLB.
+
+Deduplicated tail pages are not mapped read-only.
+
+Here's how things look on device-dax after the sections are populated::
+
+ +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ |           |                     |     0     | -------------> |     0     |
+ |           |                     +-----------+                +-----------+
+ |           |                     |     1     | -------------> |     1     |
+ |           |                     +-----------+                +-----------+
+ |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ |           |                     +-----------+                   | | | | |
+ |           |                     |     3     | ------------------+ | | | |
+ |           |                     +-----------+                     | | | |
+ |           |                     |     4     | --------------------+ | | |
+ |    PMD    |                     +-----------+                       | | |
+ |   level   |                     |     5     | ----------------------+ | |
+ |  mapping  |                     +-----------+                         | |
+ |           |                     |     6     | ------------------------+ |
+ |           |                     +-----------+                           |
+ |           |                     |     7     | --------------------------+
+ |           |                     +-----------+
+ |           |
+ |           |
+ |           |
+ +-----------+

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5f549cf6a4e8..ad7a845f15b8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3118,7 +3118,7 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
 pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
 pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
-			    struct vmem_altmap *altmap);
+			    struct vmem_altmap *altmap, struct page *reuse);
 void *vmemmap_alloc_block(unsigned long size, int node);
 struct vmem_altmap;
 void *vmemmap_alloc_block_buf(unsigned long size, int node,
diff --git a/mm/memremap.c b/mm/memremap.c
index 2e9148a3421a..a6be2f5bf443 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -307,6 +307,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 {
 	struct mhp_params params = {
 		.altmap = pgmap_altmap(pgmap),
+		.pgmap = pgmap,
 		.pgprot = PAGE_KERNEL,
 	};
 	const int nr_range = pgmap->nr_range;
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 1b30a82f285e..642e4c8467b6 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -533,16 +533,31 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
 }

 pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
-				       struct vmem_altmap *altmap)
+				       struct vmem_altmap *altmap,
+				       struct page *reuse)
 {
 	pte_t *pte = pte_offset_kernel(pmd, addr);
 	if (pte_none(*pte)) {
 		pte_t entry;
 		void *p;

-		p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
-		if (!p)
-			return NULL;
+		if (!reuse) {
+			p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
+			if (!p)
+				return NULL;
+		} else {
+			/*
+			 * When a PTE/PMD entry is freed from the init_mm
+			 * there's a free_pages() call to this page allocated
+			 * above. Thus this get_page() is paired with the
+			 * put_page_testzero() on the freeing path.
+			 * This can only be called by certain ZONE_DEVICE paths,
+			 * and through vmemmap_populate_compound_pages() when
+			 * slab is available.
+			 */
+			get_page(reuse);
+			p = page_to_virt(reuse);
+		}
 		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
@@ -609,7 +624,8 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 }

 static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
-						  struct vmem_altmap *altmap)
+						  struct vmem_altmap *altmap,
+						  struct page *reuse)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -629,7 +645,7 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 	pmd = vmemmap_pmd_populate(pud, addr, node);
 	if (!pmd)
 		return NULL;
-	pte = vmemmap_pte_populate(pmd, addr, node, altmap);
+	pte = vmemmap_pte_populate(pmd, addr, node, altmap, reuse);
 	if (!pte)
 		return NULL;
 	vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
@@ -639,13 +655,14 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,

 static int __meminit vmemmap_populate_range(unsigned long start,
 					    unsigned long end, int node,
-					    struct vmem_altmap *altmap)
+					    struct vmem_altmap *altmap,
+					    struct page *reuse)
 {
 	unsigned long addr = start;
 	pte_t *pte;

 	for (; addr < end; addr += PAGE_SIZE) {
-		pte = vmemmap_populate_address(addr, node, altmap);
+		pte = vmemmap_populate_address(addr, node, altmap, reuse);
 		if (!pte)
 			return -ENOMEM;
 	}
@@ -656,7 +673,95 @@ static int __meminit vmemmap_populate_range(unsigned long start,
 int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
 					 int node, struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_range(start, end, node, altmap);
+	return vmemmap_populate_range(start, end, node, altmap, NULL);
+}
+
+/*
+ * For compound pages bigger than section size (e.g. x86 1G compound
+ * pages with 2M subsection size) fill the rest of sections as tail
+ * pages.
+ *
+ * Note that memremap_pages() resets @nr_range value and will increment
+ * it after each successful range onlining. Thus the value of @nr_range
+ * at section memmap populate corresponds to the in-progress range
+ * being onlined here.
+ */
+static bool __meminit reuse_compound_section(unsigned long start_pfn,
+					     struct dev_pagemap *pgmap)
+{
+	unsigned long nr_pages = pgmap_vmemmap_nr(pgmap);
+	unsigned long offset = start_pfn -
+		PHYS_PFN(pgmap->ranges[pgmap->nr_range].start);
+
+	return !IS_ALIGNED(offset, nr_pages) && nr_pages > PAGES_PER_SUBSECTION;
+}
+
+static pte_t * __meminit compound_section_tail_page(unsigned long addr)
+{
+	pte_t *pte;
+
+	addr -= PAGE_SIZE;
+
+	/*
+	 * Assuming sections are populated sequentially, the previous section's
+	 * page data can be reused.
+	 */
+	pte = pte_offset_kernel(pmd_off_k(addr), addr);
+	if (!pte)
+		return NULL;
+
+	return pte;
+}
+
+static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
+						     unsigned long start,
+						     unsigned long end, int node,
+						     struct dev_pagemap *pgmap)
+{
+	unsigned long size, addr;
+	pte_t *pte;
+	int rc;
+
+	if (reuse_compound_section(start_pfn, pgmap)) {
+		pte = compound_section_tail_page(start);
+		if (!pte)
+			return -ENOMEM;
+
+		/*
+		 * Reuse the page that was populated in the prior iteration
+		 * with just tail struct pages.
+		 */
+		return vmemmap_populate_range(start, end, node, NULL,
+					      pte_page(*pte));
+	}
+
+	size = min(end - start, pgmap_vmemmap_nr(pgmap) * sizeof(struct page));
+	for (addr = start; addr < end; addr += size) {
+		unsigned long next = addr, last = addr + size;
+
+		/* Populate the head page vmemmap page */
+		pte = vmemmap_populate_address(addr, node, NULL, NULL);
+		if (!pte)
+			return -ENOMEM;
+
+		/* Populate the tail pages vmemmap page */
+		next = addr + PAGE_SIZE;
+		pte = vmemmap_populate_address(next, node, NULL, NULL);
+		if (!pte)
+			return -ENOMEM;
+
+		/*
+		 * Reuse the previous page for the rest of tail pages
+		 * See layout diagram in Documentation/vm/vmemmap_dedup.rst
+		 */
+		next += PAGE_SIZE;
+		rc = vmemmap_populate_range(next, last, node, NULL,
+					    pte_page(*pte));
+		if (rc)
+			return -ENOMEM;
+	}
+
+	return 0;
 }

 struct page * __meminit __populate_section_memmap(unsigned long pfn,
@@ -665,12 +770,19 @@
 {
 	unsigned long start = (unsigned long) pfn_to_page(pfn);
 	unsigned long end = start + nr_pages * sizeof(struct page);
+	int r;

 	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
 		!IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
 		return NULL;

-	if (vmemmap_populate(start, end, nid, altmap))
+	if (is_power_of_2(sizeof(struct page)) &&
+	    pgmap && pgmap_vmemmap_nr(pgmap) > 1 && !altmap)
+		r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
+	else
+		r = vmemmap_populate(start, end, nid, altmap);
+
+	if (r < 0)
 		return NULL;

 	return pfn_to_page(pfn);

From patchwork Mon Mar 7 12:24:57 2022
X-Patchwork-Submitter: Joao Martins
X-Patchwork-Id: 12771699
From: Joao Martins
To: linux-mm@kvack.org
Cc: Dan Williams, Vishal Verma, Matthew Wilcox, Jason Gunthorpe, Jane Chu,
    Muchun Song, Mike Kravetz, Andrew Morton, Jonathan Corbet,
    Christoph Hellwig, nvdimm@lists.linux.dev, linux-doc@vger.kernel.org,
    Joao Martins
Subject: [PATCH v8 5/5] mm/page_alloc: reuse tail struct pages for compound devmaps
Date: Mon, 7 Mar 2022 12:24:57 +0000
Message-Id: <20220307122457.10066-6-joao.m.martins@oracle.com>
In-Reply-To: <20220307122457.10066-1-joao.m.martins@oracle.com>
References: <20220307122457.10066-1-joao.m.martins@oracle.com>
Currently memmap_init_zone_device() ends up initializing 32768 pages
when it only needs to initialize 128 given tail page reuse. That number
is worse with 1GB compound pages, 262144 instead of 128. Update
memmap_init_zone_device() to skip redundant initialization, detailed
below.

When a pgmap @vmemmap_shift is set, all pages are mapped at a given
huge page alignment and use compound pages to describe them as opposed
to a struct per 4K.

With @vmemmap_shift > 0 and when struct pages are stored in RAM
(!altmap) most tail pages are reused. Consequently, the amount of
unique struct pages is a lot smaller than the total amount of struct
pages being mapped.

The altmap path is left alone since it does not support memory savings
based on compound devmaps.

Signed-off-by: Joao Martins
Reviewed-by: Muchun Song
---
 mm/page_alloc.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e0c1e6bb09dd..d969b27f7b56 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6653,6 +6653,21 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
 	}
 }

+/*
+ * With compound page geometry and when struct pages are stored in RAM most
+ * tail pages are reused. Consequently, the amount of unique struct pages to
+ * initialize is a lot smaller than the total amount of struct pages being
+ * mapped. This is a paired / mild layering violation with explicit knowledge
+ * of how the sparse_vmemmap internals handle compound pages in the absence
+ * of an altmap. See vmemmap_populate_compound_pages().
+ */
+static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
+					      unsigned long nr_pages)
+{
+	return is_power_of_2(sizeof(struct page)) &&
+		!altmap ? 2 * (PAGE_SIZE / sizeof(struct page)) : nr_pages;
+}
+
 static void __ref memmap_init_compound(struct page *head,
 				       unsigned long head_pfn,
 				       unsigned long zone_idx, int nid,
@@ -6717,7 +6732,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
 			continue;

 		memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
-				     pfns_per_compound);
+				     compound_nr_pages(altmap, pfns_per_compound));
 	}

 	pr_info("%s initialised %lu pages in %ums\n", __func__,