From patchwork Tue Mar 5 10:15:22 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13582012
From: Leon Romanovsky <leon@kernel.org>
To: Christoph Hellwig, Robin Murphy, Marek Szyprowski, Joerg Roedel,
 Will Deacon, Jason Gunthorpe, Chaitanya Kulkarni
Cc: Leon Romanovsky, Jonathan Corbet, Jens Axboe, Keith Busch,
 Sagi Grimberg, Yishai Hadas, Shameer Kolothum, Kevin Tian,
 Alex Williamson, Jérôme Glisse, Andrew Morton,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
 iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
 kvm@vger.kernel.org, linux-mm@kvack.org, Bart Van Assche,
 Damien Le Moal, Amir Goldstein, josef@toxicpanda.com,
 "Martin K. Petersen", daniel@iogearbox.net, Dan Williams,
 jack@suse.com, Zhu Yanjun
Subject: [RFC 12/16] vfio/mlx5: Rewrite create mkey flow to allow better code reuse
Date: Tue, 5 Mar 2024 12:15:22 +0200
Message-ID: <9366169430357d953e961cd41ae912c5fbd3f568.1709631413.git.leon@kernel.org>
X-Mailer: git-send-email 2.44.0
From: Leon Romanovsky <leon@kernel.org>

Change mkey creation to be performed in multiple steps: data
allocation, DMA setup, and the actual HW call that creates the mkey.
In this new flow, the whole input to the MKEY command is saved, which
eliminates the need to keep an array of pointers to the DMA addresses
of the receive list (and, in future patches, of the send list too).

In addition to the memory size reduction and the elimination of
unnecessary data movement when setting the MKEY input, the code is now
prepared for future reuse.

Signed-off-by: Leon Romanovsky <leon@kernel.org>
---
 drivers/vfio/pci/mlx5/cmd.c | 149 +++++++++++++++++++++---------------
 drivers/vfio/pci/mlx5/cmd.h |   3 +-
 2 files changed, 88 insertions(+), 64 deletions(-)

diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
index 45104e47b7b2..44762980fcb9 100644
--- a/drivers/vfio/pci/mlx5/cmd.c
+++ b/drivers/vfio/pci/mlx5/cmd.c
@@ -300,39 +300,21 @@ static int mlx5vf_cmd_get_vhca_id(struct mlx5_core_dev *mdev, u16 function_id,
         return ret;
 }
 
-static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn,
-                        struct mlx5_vhca_data_buffer *buf,
-                        struct mlx5_vhca_recv_buf *recv_buf,
-                        u32 *mkey)
+static u32 *alloc_mkey_in(u32 npages, u32 pdn)
 {
-        size_t npages = buf ? buf->npages : recv_buf->npages;
-        int err = 0, inlen;
-        __be64 *mtt;
+        int inlen;
         void *mkc;
         u32 *in;
 
         inlen = MLX5_ST_SZ_BYTES(create_mkey_in) +
-                sizeof(*mtt) * round_up(npages, 2);
+                sizeof(__be64) * round_up(npages, 2);
 
-        in = kvzalloc(inlen, GFP_KERNEL);
+        in = kvzalloc(inlen, GFP_KERNEL_ACCOUNT);
         if (!in)
-                return -ENOMEM;
+                return NULL;
 
         MLX5_SET(create_mkey_in, in, translations_octword_actual_size,
                  DIV_ROUND_UP(npages, 2));
-        mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt);
-
-        if (buf) {
-                struct sg_dma_page_iter dma_iter;
-
-                for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0)
-                        *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter));
-        } else {
-                int i;
-
-                for (i = 0; i < npages; i++)
-                        *mtt++ = cpu_to_be64(recv_buf->dma_addrs[i]);
-        }
 
         mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry);
         MLX5_SET(mkc, mkc, access_mode_1_0, MLX5_MKC_ACCESS_MODE_MTT);
@@ -346,9 +328,30 @@ static int _create_mkey(struct mlx5_core_dev *mdev, u32 pdn,
         MLX5_SET(mkc, mkc, log_page_size, PAGE_SHIFT);
         MLX5_SET(mkc, mkc, translations_octword_size, DIV_ROUND_UP(npages, 2));
         MLX5_SET64(mkc, mkc, len, npages * PAGE_SIZE);
-        err = mlx5_core_create_mkey(mdev, mkey, in, inlen);
-        kvfree(in);
-        return err;
+
+        return in;
+}
+
+static int create_mkey(struct mlx5_core_dev *mdev, u32 npages,
+                       struct mlx5_vhca_data_buffer *buf, u32 *mkey_in,
+                       u32 *mkey)
+{
+        __be64 *mtt;
+        int inlen;
+
+        mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt);
+
+        if (buf) {
+                struct sg_dma_page_iter dma_iter;
+
+                for_each_sgtable_dma_page(&buf->table.sgt, &dma_iter, 0)
+                        *mtt++ = cpu_to_be64(sg_page_iter_dma_address(&dma_iter));
+        }
+
+        inlen = MLX5_ST_SZ_BYTES(create_mkey_in) +
+                sizeof(__be64) * round_up(npages, 2);
+
+        return mlx5_core_create_mkey(mdev, mkey, mkey_in, inlen);
 }
 
 static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf)
@@ -368,13 +371,22 @@ static int mlx5vf_dma_data_buffer(struct mlx5_vhca_data_buffer *buf)
         if (ret)
                 return ret;
 
-        ret = _create_mkey(mdev, buf->migf->pdn, buf, NULL, &buf->mkey);
-        if (ret)
+        buf->mkey_in = alloc_mkey_in(buf->npages, buf->migf->pdn);
+        if (!buf->mkey_in) {
+                ret = -ENOMEM;
                 goto err;
+        }
+
+        ret = create_mkey(mdev, buf->npages, buf, buf->mkey_in, &buf->mkey);
+        if (ret)
+                goto err_create_mkey;
 
         buf->dmaed = true;
 
         return 0;
+
+err_create_mkey:
+        kvfree(buf->mkey_in);
 err:
         dma_unmap_sgtable(mdev->device, &buf->table.sgt, buf->dma_dir, 0);
         return ret;
@@ -390,6 +402,7 @@ void mlx5vf_free_data_buffer(struct mlx5_vhca_data_buffer *buf)
 
         if (buf->dmaed) {
                 mlx5_core_destroy_mkey(migf->mvdev->mdev, buf->mkey);
+                kvfree(buf->mkey_in);
                 dma_unmap_sgtable(migf->mvdev->mdev->device, &buf->table.sgt,
                                   buf->dma_dir, 0);
         }
@@ -1286,46 +1299,45 @@ static int alloc_recv_pages(struct mlx5_vhca_recv_buf *recv_buf,
         return -ENOMEM;
 }
 
-static int register_dma_recv_pages(struct mlx5_core_dev *mdev,
-                                   struct mlx5_vhca_recv_buf *recv_buf)
+static void unregister_dma_pages(struct mlx5_core_dev *mdev, u32 npages,
+                                 u32 *mkey_in)
 {
-        int i, j;
+        dma_addr_t addr;
+        __be64 *mtt;
+        int i;
 
-        recv_buf->dma_addrs = kvcalloc(recv_buf->npages,
-                                       sizeof(*recv_buf->dma_addrs),
-                                       GFP_KERNEL_ACCOUNT);
-        if (!recv_buf->dma_addrs)
-                return -ENOMEM;
+        mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt);
 
-        for (i = 0; i < recv_buf->npages; i++) {
-                recv_buf->dma_addrs[i] = dma_map_page(mdev->device,
-                                                      recv_buf->page_list[i],
-                                                      0, PAGE_SIZE,
-                                                      DMA_FROM_DEVICE);
-                if (dma_mapping_error(mdev->device, recv_buf->dma_addrs[i]))
-                        goto error;
+        for (i = npages - 1; i >= 0; i--) {
+                addr = be64_to_cpu(mtt[i]);
+                dma_unmap_single(mdev->device, addr, PAGE_SIZE,
+                                 DMA_FROM_DEVICE);
         }
-        return 0;
-
-error:
-        for (j = 0; j < i; j++)
-                dma_unmap_single(mdev->device, recv_buf->dma_addrs[j],
-                                 PAGE_SIZE, DMA_FROM_DEVICE);
-
-        kvfree(recv_buf->dma_addrs);
-        return -ENOMEM;
 }
 
-static void unregister_dma_recv_pages(struct mlx5_core_dev *mdev,
-                                      struct mlx5_vhca_recv_buf *recv_buf)
+static int register_dma_pages(struct mlx5_core_dev *mdev, u32 npages,
+                              struct page **page_list, u32 *mkey_in)
 {
+        dma_addr_t addr;
+        __be64 *mtt;
         int i;
 
-        for (i = 0; i < recv_buf->npages; i++)
-                dma_unmap_single(mdev->device, recv_buf->dma_addrs[i],
-                                 PAGE_SIZE, DMA_FROM_DEVICE);
+        mtt = (__be64 *)MLX5_ADDR_OF(create_mkey_in, mkey_in, klm_pas_mtt);
+
+        for (i = 0; i < npages; i++) {
+                addr = dma_map_page(mdev->device, page_list[i], 0, PAGE_SIZE,
+                                    DMA_FROM_DEVICE);
+                if (dma_mapping_error(mdev->device, addr))
+                        goto error;
+
+                *mtt++ = cpu_to_be64(addr);
+        }
+
+        return 0;
 
-        kvfree(recv_buf->dma_addrs);
+error:
+        unregister_dma_pages(mdev, i, mkey_in);
+        return -ENOMEM;
 }
 
 static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev,
@@ -1334,7 +1346,8 @@ static void mlx5vf_free_qp_recv_resources(struct mlx5_core_dev *mdev,
         struct mlx5_vhca_recv_buf *recv_buf = &qp->recv_buf;
 
         mlx5_core_destroy_mkey(mdev, recv_buf->mkey);
-        unregister_dma_recv_pages(mdev, recv_buf);
+        unregister_dma_pages(mdev, recv_buf->npages, recv_buf->mkey_in);
+        kvfree(recv_buf->mkey_in);
         free_recv_pages(&qp->recv_buf);
 }
 
@@ -1350,18 +1363,28 @@ static int mlx5vf_alloc_qp_recv_resources(struct mlx5_core_dev *mdev,
         if (err < 0)
                 return err;
 
-        err = register_dma_recv_pages(mdev, recv_buf);
-        if (err)
+        recv_buf->mkey_in = alloc_mkey_in(npages, pdn);
+        if (!recv_buf->mkey_in) {
+                err = -ENOMEM;
                 goto end;
+        }
+
+        err = register_dma_pages(mdev, npages, recv_buf->page_list,
+                                 recv_buf->mkey_in);
+        if (err)
+                goto err_register_dma;
 
-        err = _create_mkey(mdev, pdn, NULL, recv_buf, &recv_buf->mkey);
+        err = create_mkey(mdev, npages, NULL, recv_buf->mkey_in,
+                          &recv_buf->mkey);
         if (err)
                 goto err_create_mkey;
 
         return 0;
 
 err_create_mkey:
-        unregister_dma_recv_pages(mdev, recv_buf);
+        unregister_dma_pages(mdev, npages, recv_buf->mkey_in);
+err_register_dma:
+        kvfree(recv_buf->mkey_in);
 end:
         free_recv_pages(recv_buf);
         return err;
diff --git a/drivers/vfio/pci/mlx5/cmd.h b/drivers/vfio/pci/mlx5/cmd.h
index 887267ebbd8a..83728c0669e7 100644
--- a/drivers/vfio/pci/mlx5/cmd.h
+++ b/drivers/vfio/pci/mlx5/cmd.h
@@ -62,6 +62,7 @@ struct mlx5_vhca_data_buffer {
         u64 length;
         u32 npages;
         u32 mkey;
+        u32 *mkey_in;
         enum dma_data_direction dma_dir;
         u8 dmaed:1;
         u8 stop_copy_chunk_num;
@@ -137,8 +138,8 @@ struct mlx5_vhca_cq {
 struct mlx5_vhca_recv_buf {
         u32 npages;
         struct page **page_list;
-        dma_addr_t *dma_addrs;
         u32 next_rq_offset;
+        u32 *mkey_in;
         u32 mkey;
 };