From patchwork Mon May 13 12:53:45 2024
X-Patchwork-Submitter: Haakon Bugge
X-Patchwork-Id: 13663439
From: Håkon Bugge
To: linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, rds-devel@oss.oracle.com
Cc: Jason Gunthorpe, Leon Romanovsky, Saeed Mahameed, Tariq Toukan, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Tejun Heo, Lai Jiangshan, Allison Henderson, Manjunath Patil, Mark Zhang, Håkon Bugge, Chuck Lever, Shiraz Saleem, Yang Li
Subject: [PATCH 5/6] RDMA/mlx5: Brute force GFP_NOIO
Date: Mon, 13 May 2024 14:53:45 +0200
Message-Id: <20240513125346.764076-6-haakon.bugge@oracle.com>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20240513125346.764076-1-haakon.bugge@oracle.com>
References: <20240513125346.764076-1-haakon.bugge@oracle.com>

In mlx5_ib_init(), we call memalloc_noio_{save,restore} in a
parenthetic fashion when enabled by the module parameter force_noio.
We do this in order to conditionally enable mlx5_ib to operate safely
in the I/O path of a block device. Any work queued later on
work-queues created during module initialization will inherit the
PF_MEMALLOC_{NOIO,NOFS} flag(s), due to commit ("workqueue: Inherit
NOIO and NOFS alloc flags").

We do this in order to enable ULPs that use the RDMA stack and the
mlx5_ib driver to serve as a network block I/O device. This supports
running a filesystem on top of a raw block device which uses said
ULP(s) and the RDMA stack as its network transport layer.

Under intense memory pressure, memory reclaim kicks in. Assume the
filesystem reclaims memory by writing back to the raw block device,
which calls into the ULP in question, which in turn calls the RDMA
stack. Now, if regular GFP_KERNEL allocations in the ULP or the RDMA
stack require reclaim in order to be fulfilled, we end up in a
circular dependency.

We break this circular dependency by:

1. Forcing all allocations in the ULP and the relevant parts of the
   RDMA stack to use GFP_NOIO, by means of a parenthetic use of
   memalloc_noio_{save,restore} on all relevant entry points (see the
   sketch below).

2. Making sure work-queues inherit current->flags wrt.
   PF_MEMALLOC_{NOIO,NOFS}, such that work executed on those
   work-queues inherits the same flag(s).
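A minimal sketch of the parenthetic NOIO pattern, for illustration
only: example_init(), example_exit() and example_wq are hypothetical
stand-ins and not part of this patch; the real change to
mlx5_ib_init() is in the diff below.

#include <linux/module.h>
#include <linux/sched/mm.h>	/* memalloc_noio_save()/restore() */
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;	/* hypothetical */

static int __init example_init(void)
{
	unsigned int noio_flags;
	int ret = 0;

	/*
	 * Open the NOIO scope: from here on, allocations made by this
	 * task behave as if GFP_NOIO had been requested.
	 */
	noio_flags = memalloc_noio_save();

	/*
	 * A workqueue created inside the scope inherits the task's
	 * PF_MEMALLOC_NOIO (per "workqueue: Inherit NOIO and NOFS
	 * alloc flags" in this series), so work items queued on it
	 * later also run with NOIO semantics.
	 */
	example_wq = alloc_ordered_workqueue("example_wq", 0);
	if (!example_wq)
		ret = -ENOMEM;

	/* Close the NOIO scope: the "parenthetic" use. */
	memalloc_noio_restore(noio_flags);
	return ret;
}

static void __exit example_exit(void)
{
	destroy_workqueue(example_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");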
Signed-off-by: Håkon Bugge
---
 drivers/infiniband/hw/mlx5/main.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index c2b557e642906..a424d518538ed 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -56,6 +56,10 @@ MODULE_AUTHOR("Eli Cohen ");
 MODULE_DESCRIPTION("Mellanox 5th generation network adapters (ConnectX series) IB driver");
 MODULE_LICENSE("Dual BSD/GPL");
 
+static bool mlx5_ib_force_noio;
+module_param_named(force_noio, mlx5_ib_force_noio, bool, 0444);
+MODULE_PARM_DESC(force_noio, "Force the use of GFP_NOIO (Y/N)");
+
 struct mlx5_ib_event_work {
 	struct work_struct work;
 	union {
@@ -4489,16 +4493,23 @@ static struct auxiliary_driver mlx5r_driver = {
 
 static int __init mlx5_ib_init(void)
 {
+	unsigned int noio_flags;
 	int ret;
 
+	if (mlx5_ib_force_noio)
+		noio_flags = memalloc_noio_save();
+
 	xlt_emergency_page = (void *)__get_free_page(GFP_KERNEL);
-	if (!xlt_emergency_page)
-		return -ENOMEM;
+	if (!xlt_emergency_page) {
+		ret = -ENOMEM;
+		goto out;
+	}
 
 	mlx5_ib_event_wq = alloc_ordered_workqueue("mlx5_ib_event_wq", 0);
 	if (!mlx5_ib_event_wq) {
 		free_page((unsigned long)xlt_emergency_page);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out;
 	}
 
 	ret = mlx5_ib_qp_event_init();
@@ -4515,7 +4526,7 @@ static int __init mlx5_ib_init(void)
 	ret = auxiliary_driver_register(&mlx5r_driver);
 	if (ret)
 		goto drv_err;
-	return 0;
+	goto out;
 
 drv_err:
 	auxiliary_driver_unregister(&mlx5r_mp_driver);
@@ -4526,6 +4537,9 @@ static int __init mlx5_ib_init(void)
 qp_event_err:
 	destroy_workqueue(mlx5_ib_event_wq);
 	free_page((unsigned long)xlt_emergency_page);
+out:
+	if (mlx5_ib_force_noio)
+		memalloc_noio_restore(noio_flags);
 	return ret;
 }
 
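For completeness, a hedged usage example (assuming the usual module
name mlx5_ib): with this patch applied, the NOIO mode would be
requested at load time via the new read-only module parameter, e.g.

	modprobe mlx5_ib force_noio=1

and its value would then be visible under
/sys/module/mlx5_ib/parameters/force_noio.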