From patchwork Tue Apr 8 16:54:03 2025
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 14043446
From: Jason Gunthorpe
To: Alexandre Ghiti, Alim Akhtar, Alyssa Rosenzweig, Albert Ou,
    asahi@lists.linux.dev, David Woodhouse, Heiko Stuebner,
    iommu@lists.linux.dev, Janne Grunau, Jernej Skrabec, Jonathan Hunter,
    Joerg Roedel, Krzysztof Kozlowski, linux-arm-kernel@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-rockchip@lists.infradead.org,
    linux-samsung-soc@vger.kernel.org, linux-sunxi@lists.linux.dev,
    linux-tegra@vger.kernel.org, Marek Szyprowski, Neal Gompa,
    Palmer Dabbelt, Paul Walmsley, Robin Murphy, Samuel Holland,
    Suravee Suthikulpanit, Sven Peter, Thierry Reding, Tomasz Jeznach,
    Krishna Reddy, Chen-Yu Tsai, Will Deacon
Cc: Alejandro Jimenez, Bagas Sanjaya, Lu Baolu, Joerg Roedel, Nicolin Chen,
    Pasha Tatashin, patches@lists.linux.dev, David Rientjes, Mostafa Saleh,
    Matthew Wilcox
Subject: [PATCH v4 15/23] iommu/pages: Allow sub page sizes to be passed
 into the allocator
Date: Tue, 8 Apr 2025 13:54:03 -0300
Message-ID: <15-v4-c8663abbb606+3f7-iommu_pages_jgg@nvidia.com>
In-Reply-To: <0-v4-c8663abbb606+3f7-iommu_pages_jgg@nvidia.com>

Generally drivers have a specific idea what their HW structure size should
be. In a lot of cases this is related to PAGE_SIZE, but not always. ARM64,
for example, allows a 4K IO page table size on a 64K CPU page table system.

Currently we don't have any good support for sub page allocations, but make
the API accommodate this by accepting a sub page size from the caller and
rounding up internally.

This is done by moving away from order as the size input and using size:

  size == 1 << (order + PAGE_SHIFT)

Following patches convert drivers away from using order and try to specify
allocation sizes independent of PAGE_SIZE.
Reviewed-by: Lu Baolu
Tested-by: Alejandro Jimenez
Tested-by: Nicolin Chen
Signed-off-by: Jason Gunthorpe
---
 drivers/iommu/iommu-pages.c | 29 +++++++++++++++---------
 drivers/iommu/iommu-pages.h | 44 ++++++++++++++++++++++++++++++++-----
 include/linux/iommu.h       |  6 ++---
 3 files changed, 61 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/iommu-pages.c b/drivers/iommu/iommu-pages.c
index a7eed09420a231..4cc77fddfeeb47 100644
--- a/drivers/iommu/iommu-pages.c
+++ b/drivers/iommu/iommu-pages.c
@@ -23,24 +23,32 @@ IOPTDESC_MATCH(memcg_data, memcg_data);
 static_assert(sizeof(struct ioptdesc) <= sizeof(struct page));
 
 /**
- * iommu_alloc_pages_node - Allocate a zeroed page of a given order from
- * specific NUMA node
+ * iommu_alloc_pages_node_sz - Allocate a zeroed page of a given size from
+ * specific NUMA node
  * @nid: memory NUMA node id
  * @gfp: buddy allocator flags
- * @order: page order
+ * @size: Memory size to allocate, rounded up to a power of 2
  *
- * Returns the virtual address of the allocated page. The page must be
- * freed either by calling iommu_free_pages() or via iommu_put_pages_list().
+ * Returns the virtual address of the allocated page. The page must be freed
+ * either by calling iommu_free_pages() or via iommu_put_pages_list(). The
+ * returned allocation is round_up_pow_two(size) big, and is physically aligned
+ * to its size.
  */
-void *iommu_alloc_pages_node(int nid, gfp_t gfp, unsigned int order)
+void *iommu_alloc_pages_node_sz(int nid, gfp_t gfp, size_t size)
 {
-	const unsigned long pgcnt = 1UL << order;
+	unsigned long pgcnt;
 	struct folio *folio;
+	unsigned int order;
 
 	/* This uses page_address() on the memory. */
 	if (WARN_ON(gfp & __GFP_HIGHMEM))
 		return NULL;
 
+	/*
+	 * Currently sub page allocations result in a full page being returned.
+	 */
+	order = get_order(size);
+
 	/*
 	 * __folio_alloc_node() does not handle NUMA_NO_NODE like
 	 * alloc_pages_node() did.
@@ -61,12 +69,13 @@ void *iommu_alloc_pages_node(int nid, gfp_t gfp, unsigned int order)
 	 * This is necessary for the proper accounting as IOMMU state can be
 	 * rather large, i.e. multiple gigabytes in size.
 	 */
+	pgcnt = 1UL << order;
 	mod_node_page_state(folio_pgdat(folio), NR_IOMMU_PAGES, pgcnt);
 	lruvec_stat_mod_folio(folio, NR_SECONDARY_PAGETABLE, pgcnt);
 
 	return folio_address(folio);
 }
-EXPORT_SYMBOL_GPL(iommu_alloc_pages_node);
+EXPORT_SYMBOL_GPL(iommu_alloc_pages_node_sz);
 
 static void __iommu_free_desc(struct ioptdesc *iopt)
 {
@@ -82,7 +91,7 @@ static void __iommu_free_desc(struct ioptdesc *iopt)
  * iommu_free_pages - free pages
  * @virt: virtual address of the page to be freed.
  *
- * The page must have have been allocated by iommu_alloc_pages_node()
+ * The page must have been allocated by iommu_alloc_pages_node_sz()
  */
 void iommu_free_pages(void *virt)
 {
@@ -96,7 +105,7 @@ EXPORT_SYMBOL_GPL(iommu_free_pages);
  * iommu_put_pages_list - free a list of pages.
  * @list: The list of pages to be freed
  *
- * Frees a list of pages allocated by iommu_alloc_pages_node().
+ * Frees a list of pages allocated by iommu_alloc_pages_node_sz().
  */
 void iommu_put_pages_list(struct iommu_pages_list *list)
 {
diff --git a/drivers/iommu/iommu-pages.h b/drivers/iommu/iommu-pages.h
index f4578f252e2580..3c4575d637da6d 100644
--- a/drivers/iommu/iommu-pages.h
+++ b/drivers/iommu/iommu-pages.h
@@ -46,14 +46,14 @@ static inline struct ioptdesc *virt_to_ioptdesc(void *virt)
 	return folio_ioptdesc(virt_to_folio(virt));
 }
 
-void *iommu_alloc_pages_node(int nid, gfp_t gfp, unsigned int order);
+void *iommu_alloc_pages_node_sz(int nid, gfp_t gfp, size_t size);
 void iommu_free_pages(void *virt);
 void iommu_put_pages_list(struct iommu_pages_list *list);
 
 /**
  * iommu_pages_list_add - add the page to a iommu_pages_list
  * @list: List to add the page to
- * @virt: Address returned from iommu_alloc_pages_node()
+ * @virt: Address returned from iommu_alloc_pages_node_sz()
  */
 static inline void iommu_pages_list_add(struct iommu_pages_list *list,
 					void *virt)
@@ -84,16 +84,48 @@ static inline bool iommu_pages_list_empty(struct iommu_pages_list *list)
 	return list_empty(&list->pages);
 }
 
+/**
+ * iommu_alloc_pages_node - Allocate a zeroed page of a given order from
+ * specific NUMA node
+ * @nid: memory NUMA node id
+ * @gfp: buddy allocator flags
+ * @order: page order
+ *
+ * Returns the virtual address of the allocated page.
+ * Prefer to use iommu_alloc_pages_node_sz()
+ */
+static inline void *iommu_alloc_pages_node(int nid, gfp_t gfp,
+					   unsigned int order)
+{
+	return iommu_alloc_pages_node_sz(nid, gfp, 1 << (order + PAGE_SHIFT));
+}
+
 /**
  * iommu_alloc_pages - allocate a zeroed page of a given order
  * @gfp: buddy allocator flags
  * @order: page order
  *
  * returns the virtual address of the allocated page
+ * Prefer to use iommu_alloc_pages_sz()
  */
 static inline void *iommu_alloc_pages(gfp_t gfp, int order)
 {
-	return iommu_alloc_pages_node(NUMA_NO_NODE, gfp, order);
+	return iommu_alloc_pages_node_sz(NUMA_NO_NODE, gfp,
+					 1 << (order + PAGE_SHIFT));
+}
+
+/**
+ * iommu_alloc_pages_sz - Allocate a zeroed page of a given size
+ * @gfp: buddy allocator flags
+ * @size: Memory size to allocate, this is rounded up to a power of 2
+ *
+ * Returns the virtual address of the allocated page.
+ */
+static inline void *iommu_alloc_pages_sz(gfp_t gfp, size_t size)
+{
+	return iommu_alloc_pages_node_sz(NUMA_NO_NODE, gfp, size);
 }
 
 /**
@@ -102,10 +134,11 @@ static inline void *iommu_alloc_pages(gfp_t gfp, int order)
  * @gfp: buddy allocator flags
  *
  * returns the virtual address of the allocated page
+ * Prefer to use iommu_alloc_pages_node_sz()
  */
 static inline void *iommu_alloc_page_node(int nid, gfp_t gfp)
 {
-	return iommu_alloc_pages_node(nid, gfp, 0);
+	return iommu_alloc_pages_node_sz(nid, gfp, PAGE_SIZE);
 }
 
 /**
@@ -113,10 +146,11 @@ static inline void *iommu_alloc_page_node(int nid, gfp_t gfp)
  * @gfp: buddy allocator flags
  *
  * returns the virtual address of the allocated page
+ * Prefer to use iommu_alloc_pages_sz()
  */
 static inline void *iommu_alloc_page(gfp_t gfp)
 {
-	return iommu_alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+	return iommu_alloc_pages_node_sz(NUMA_NO_NODE, gfp, PAGE_SIZE);
 }
 
 #endif /* __IOMMU_PAGES_H */
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 3fb62165db1992..062818b518221f 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -342,9 +342,9 @@ typedef unsigned int ioasid_t;
 #define IOMMU_DIRTY_NO_CLEAR (1 << 0)
 
 /*
- * Pages allocated through iommu_alloc_pages_node() can be placed on this list
- * using iommu_pages_list_add(). Note: ONLY pages from iommu_alloc_pages_node()
- * can be used this way!
+ * Pages allocated through iommu_alloc_pages_node_sz() can be placed on this
+ * list using iommu_pages_list_add(). Note: ONLY pages from
+ * iommu_alloc_pages_node_sz() can be used this way!
  */
 struct iommu_pages_list {
 	struct list_head pages;