From patchwork Thu Mar 2 11:34:14 2023
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Andrey Zhadchenko, eperezma@redhat.com, netdev@vger.kernel.org,
    stefanha@redhat.com, linux-kernel@vger.kernel.org, Jason Wang,
    "Michael S. Tsirkin", kvm@vger.kernel.org, Stefano Garzarella
Subject: [PATCH v2 1/8] vdpa: add bind_mm/unbind_mm callbacks
Date: Thu, 2 Mar 2023 12:34:14 +0100
Message-Id: <20230302113421.174582-2-sgarzare@redhat.com>
In-Reply-To: <20230302113421.174582-1-sgarzare@redhat.com>
References: <20230302113421.174582-1-sgarzare@redhat.com>

These new optional callbacks are used to bind/unbind the device to a
specific address space, so the vDPA framework can use VA when they are
implemented.

Suggested-by: Jason Wang
Signed-off-by: Stefano Garzarella
---

Notes:
    v2:
    - removed `struct task_struct *owner` param (unused for now, maybe
      useful to support cgroups) [Jason]
    - added the unbind_mm callback [Jason]

 include/linux/vdpa.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 43f59ef10cc9..369c21394284 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -290,6 +290,14 @@ struct vdpa_map_file {
  *				@vdev: vdpa device
  *				@idx: virtqueue index
  *				Returns pointer to structure device or error (NULL)
+ * @bind_mm:			Bind the device to a specific address space
+ *				so the vDPA framework can use VA when this
+ *				callback is implemented. (optional)
+ *				@vdev: vdpa device
+ *				@mm: address space to bind
+ * @unbind_mm:			Unbind the device from the address space
+ *				bound using the bind_mm callback. (optional)
+ *				@vdev: vdpa device
  * @free:			Free resources that belongs to vDPA (optional)
  *				@vdev: vdpa device
  */
@@ -351,6 +359,8 @@ struct vdpa_config_ops {
 	int (*set_group_asid)(struct vdpa_device *vdev, unsigned int group,
 			      unsigned int asid);
 	struct device *(*get_vq_dma_dev)(struct vdpa_device *vdev, u16 idx);
+	int (*bind_mm)(struct vdpa_device *vdev, struct mm_struct *mm);
+	void (*unbind_mm)(struct vdpa_device *vdev);
 
 	/* Free device resources */
 	void (*free)(struct vdpa_device *vdev);
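For illustration, a rough sketch of how a parent driver that supports user VA
could wire up the new callbacks; the "example_dev" structure, its fields, and
the callback bodies are hypothetical and not taken from this series:

#include <linux/vdpa.h>

struct example_dev {
	struct vdpa_device vdpa;
	struct mm_struct *mm;	/* address space of the owning process */
};

static int example_bind_mm(struct vdpa_device *vdev, struct mm_struct *mm)
{
	struct example_dev *dev = container_of(vdev, struct example_dev, vdpa);

	/* Remember the bound address space; a worker can later attach to it
	 * (e.g. with kthread_use_mm()) to dereference user VAs directly. */
	dev->mm = mm;
	return 0;
}

static void example_unbind_mm(struct vdpa_device *vdev)
{
	struct example_dev *dev = container_of(vdev, struct example_dev, vdpa);

	dev->mm = NULL;
}

static const struct vdpa_config_ops example_config_ops = {
	/* ... mandatory callbacks omitted ... */
	.bind_mm   = example_bind_mm,
	.unbind_mm = example_unbind_mm,
};

Since both callbacks are optional, parents that only work with PA mappings are
unaffected.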
From patchwork Thu Mar 2 11:34:15 2023
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Andrey Zhadchenko, eperezma@redhat.com, netdev@vger.kernel.org,
    stefanha@redhat.com, linux-kernel@vger.kernel.org, Jason Wang,
    "Michael S. Tsirkin", kvm@vger.kernel.org, Stefano Garzarella
Subject: [PATCH v2 2/8] vhost-vdpa: use bind_mm/unbind_mm device callbacks
Date: Thu, 2 Mar 2023 12:34:15 +0100
Message-Id: <20230302113421.174582-3-sgarzare@redhat.com>
In-Reply-To: <20230302113421.174582-1-sgarzare@redhat.com>
References: <20230302113421.174582-1-sgarzare@redhat.com>

When the user calls the VHOST_SET_OWNER ioctl and the vDPA device has
`use_va` set to true, let's call the bind_mm callback. In this way we
can bind the device to the user address space and directly use the
user VA.

The unbind_mm callback is called during the release, after stopping
the device.

Signed-off-by: Stefano Garzarella
---

Notes:
    v2:
    - call the new unbind_mm callback during the release [Jason]
    - avoid calling the bind_mm callback after a reset, since the device
      does not detach the address space during the reset

 drivers/vhost/vdpa.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index dc12dbd5b43b..1ab89fccd825 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -219,6 +219,28 @@ static int vhost_vdpa_reset(struct vhost_vdpa *v)
 	return vdpa_reset(vdpa);
 }
 
+static long vhost_vdpa_bind_mm(struct vhost_vdpa *v)
+{
+	struct vdpa_device *vdpa = v->vdpa;
+	const struct vdpa_config_ops *ops = vdpa->config;
+
+	if (!vdpa->use_va || !ops->bind_mm)
+		return 0;
+
+	return ops->bind_mm(vdpa, v->vdev.mm);
+}
+
+static void vhost_vdpa_unbind_mm(struct vhost_vdpa *v)
+{
+	struct vdpa_device *vdpa = v->vdpa;
+	const struct vdpa_config_ops *ops = vdpa->config;
+
+	if (!vdpa->use_va || !ops->unbind_mm)
+		return;
+
+	ops->unbind_mm(vdpa);
+}
+
 static long vhost_vdpa_get_device_id(struct vhost_vdpa *v, u8 __user *argp)
 {
 	struct vdpa_device *vdpa = v->vdpa;
@@ -711,6 +733,13 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 		break;
 	default:
 		r = vhost_dev_ioctl(&v->vdev, cmd, argp);
+		if (!r && cmd == VHOST_SET_OWNER) {
+			r = vhost_vdpa_bind_mm(v);
+			if (r) {
+				vhost_dev_reset_owner(&v->vdev, NULL);
+				break;
+			}
+		}
 		if (r == -ENOIOCTLCMD)
 			r = vhost_vdpa_vring_ioctl(v, cmd, argp);
 		break;
@@ -1285,6 +1314,7 @@ static int vhost_vdpa_release(struct inode *inode, struct file *filep)
 	vhost_vdpa_clean_irq(v);
 	vhost_vdpa_reset(v);
 	vhost_dev_stop(&v->vdev);
+	vhost_vdpa_unbind_mm(v);
 	vhost_vdpa_free_domain(v);
 	vhost_vdpa_config_put(v);
 	vhost_vdpa_cleanup(v);
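To show where the new calls sit in the device lifecycle, a hedged userspace
sketch (the device node name is just an example, error handling is omitted):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
	int fd = open("/dev/vhost-vdpa-0", O_RDWR);

	/* With this patch, if the parent device was registered with
	 * use_va = true, VHOST_SET_OWNER also binds the device to the
	 * caller's mm via the bind_mm callback; closing the fd unbinds
	 * it in vhost_vdpa_release(). */
	if (fd < 0 || ioctl(fd, VHOST_SET_OWNER) < 0)
		return 1;

	/* ... VHOST_SET_FEATURES, IOTLB updates carrying user VAs, ... */

	return 0;
}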
From patchwork Thu Mar 2 11:34:16 2023
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Andrey Zhadchenko, eperezma@redhat.com, netdev@vger.kernel.org,
    stefanha@redhat.com, linux-kernel@vger.kernel.org, Jason Wang,
    "Michael S. Tsirkin", kvm@vger.kernel.org, Stefano Garzarella
Subject: [PATCH v2 3/8] vringh: replace kmap_atomic() with kmap_local_page()
Date: Thu, 2 Mar 2023 12:34:16 +0100
Message-Id: <20230302113421.174582-4-sgarzare@redhat.com>
In-Reply-To: <20230302113421.174582-1-sgarzare@redhat.com>
References: <20230302113421.174582-1-sgarzare@redhat.com>

kmap_atomic() is deprecated in favor of kmap_local_page().

With kmap_local_page() the mappings are per thread, CPU local, can take
page faults, and can be called from any context (including interrupts).
Furthermore, the tasks can be preempted and, when they are scheduled to
run again, the kernel virtual addresses are restored and still valid.

kmap_atomic() is implemented like a kmap_local_page() which also
disables page faults and preemption (the latter only for !PREEMPT_RT
kernels, otherwise it only disables migration).

The code between the mapping and un-mapping in getu16_iotlb() and
putu16_iotlb() doesn't depend on the above-mentioned side effects of
kmap_atomic(), so a mere replacement of the old API with the new one is
all that is required (i.e., there is no need to explicitly add calls to
pagefault_disable() and/or preempt_disable()).

Signed-off-by: Stefano Garzarella
---

Notes:
    v2:
    - added this patch since checkpatch.pl complained about the
      deprecation of kmap_atomic() touched by the next patch

 drivers/vhost/vringh.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index a1e27da54481..0ba3ef809e48 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -1220,10 +1220,10 @@ static inline int getu16_iotlb(const struct vringh *vrh,
 	if (ret < 0)
 		return ret;
 
-	kaddr = kmap_atomic(iov.bv_page);
+	kaddr = kmap_local_page(iov.bv_page);
 	from = kaddr + iov.bv_offset;
 	*val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 
 	return 0;
 }
@@ -1241,10 +1241,10 @@ static inline int putu16_iotlb(const struct vringh *vrh,
 	if (ret < 0)
 		return ret;
 
-	kaddr = kmap_atomic(iov.bv_page);
+	kaddr = kmap_local_page(iov.bv_page);
 	to = kaddr + iov.bv_offset;
 	WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 
 	return 0;
 }
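As a side note, the general conversion pattern looks like the sketch below
(illustrative code, not from this patch); only callers that really relied on
the implicit side effects of kmap_atomic() need the second variant:

#include <linux/highmem.h>
#include <linux/uaccess.h>
#include <linux/preempt.h>

static void read_word(struct page *page, size_t off, u32 *out)
{
	void *kaddr = kmap_local_page(page);	/* per-thread, CPU-local mapping */

	*out = READ_ONCE(*(u32 *)(kaddr + off));
	kunmap_local(kaddr);
}

static void read_word_nofault(struct page *page, size_t off, u32 *out)
{
	void *kaddr;

	/* Only needed if the code really depended on kmap_atomic()'s
	 * implicit page-fault/preemption disabling. */
	preempt_disable();
	pagefault_disable();
	kaddr = kmap_local_page(page);
	*out = READ_ONCE(*(u32 *)(kaddr + off));
	kunmap_local(kaddr);
	pagefault_enable();
	preempt_enable();
}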
From patchwork Thu Mar 2 11:34:17 2023
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Andrey Zhadchenko, eperezma@redhat.com, netdev@vger.kernel.org,
    stefanha@redhat.com, linux-kernel@vger.kernel.org, Jason Wang, "Michael S.
Tsirkin" , kvm@vger.kernel.org, Stefano Garzarella Subject: [PATCH v2 4/8] vringh: support VA with iotlb Date: Thu, 2 Mar 2023 12:34:17 +0100 Message-Id: <20230302113421.174582-5-sgarzare@redhat.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230302113421.174582-1-sgarzare@redhat.com> References: <20230302113421.174582-1-sgarzare@redhat.com> MIME-Version: 1.0 Content-type: text/plain Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org vDPA supports the possibility to use user VA in the iotlb messages. So, let's add support for user VA in vringh to use it in the vDPA simulators. Signed-off-by: Stefano Garzarella Acked-by: Eugenio Pérez Martin --- Notes: v2: - replace kmap_atomic() with kmap_local_page() [see previous patch] - fix cast warnings when build with W=1 C=1 include/linux/vringh.h | 5 +- drivers/vdpa/mlx5/net/mlx5_vnet.c | 2 +- drivers/vdpa/vdpa_sim/vdpa_sim.c | 4 +- drivers/vhost/vringh.c | 247 ++++++++++++++++++++++++------ 4 files changed, 205 insertions(+), 53 deletions(-) diff --git a/include/linux/vringh.h b/include/linux/vringh.h index 1991a02c6431..d39b9f2dcba0 100644 --- a/include/linux/vringh.h +++ b/include/linux/vringh.h @@ -32,6 +32,9 @@ struct vringh { /* Can we get away with weak barriers? */ bool weak_barriers; + /* Use user's VA */ + bool use_va; + /* Last available index we saw (ie. where we're up to). */ u16 last_avail_idx; @@ -279,7 +282,7 @@ void vringh_set_iotlb(struct vringh *vrh, struct vhost_iotlb *iotlb, spinlock_t *iotlb_lock); int vringh_init_iotlb(struct vringh *vrh, u64 features, - unsigned int num, bool weak_barriers, + unsigned int num, bool weak_barriers, bool use_va, struct vring_desc *desc, struct vring_avail *avail, struct vring_used *used); diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c index 3a0e721aef05..babc8dd171a6 100644 --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c @@ -2537,7 +2537,7 @@ static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev) if (mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)) err = vringh_init_iotlb(&cvq->vring, mvdev->actual_features, - MLX5_CVQ_MAX_ENT, false, + MLX5_CVQ_MAX_ENT, false, false, (struct vring_desc *)(uintptr_t)cvq->desc_addr, (struct vring_avail *)(uintptr_t)cvq->driver_addr, (struct vring_used *)(uintptr_t)cvq->device_addr); diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index 6a0a65814626..481eb156658b 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -60,7 +60,7 @@ static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx) struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx]; uint16_t last_avail_idx = vq->vring.last_avail_idx; - vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, true, + vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, true, false, (struct vring_desc *)(uintptr_t)vq->desc_addr, (struct vring_avail *) (uintptr_t)vq->driver_addr, @@ -81,7 +81,7 @@ static void vdpasim_vq_reset(struct vdpasim *vdpasim, vq->cb = NULL; vq->private = NULL; vringh_init_iotlb(&vq->vring, vdpasim->dev_attr.supported_features, - VDPASIM_QUEUE_MAX, false, NULL, NULL, NULL); + VDPASIM_QUEUE_MAX, false, false, NULL, NULL, NULL); vq->vring.notify = NULL; } diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c index 0ba3ef809e48..61c79cea44ca 100644 --- a/drivers/vhost/vringh.c +++ b/drivers/vhost/vringh.c @@ -1094,15 +1094,99 @@ EXPORT_SYMBOL(vringh_need_notify_kern); #if IS_REACHABLE(CONFIG_VHOST_IOTLB) -static int 
iotlb_translate(const struct vringh *vrh, - u64 addr, u64 len, u64 *translated, - struct bio_vec iov[], - int iov_size, u32 perm) +static int iotlb_translate_va(const struct vringh *vrh, + u64 addr, u64 len, u64 *translated, + struct iovec iov[], + int iov_size, u32 perm) { struct vhost_iotlb_map *map; struct vhost_iotlb *iotlb = vrh->iotlb; + u64 s = 0, last = addr + len - 1; int ret = 0; + + spin_lock(vrh->iotlb_lock); + + while (len > s) { + u64 size; + + if (unlikely(ret >= iov_size)) { + ret = -ENOBUFS; + break; + } + + map = vhost_iotlb_itree_first(iotlb, addr, last); + if (!map || map->start > addr) { + ret = -EINVAL; + break; + } else if (!(map->perm & perm)) { + ret = -EPERM; + break; + } + + size = map->size - addr + map->start; + iov[ret].iov_len = min(len - s, size); + iov[ret].iov_base = (void __user *)(unsigned long) + (map->addr + addr - map->start); + s += size; + addr += size; + ++ret; + } + + spin_unlock(vrh->iotlb_lock); + + if (translated) + *translated = min(len, s); + + return ret; +} + +static inline int copy_from_va(const struct vringh *vrh, void *dst, void *src, + u64 len, u64 *translated) +{ + struct iovec iov[16]; + struct iov_iter iter; + int ret; + + ret = iotlb_translate_va(vrh, (u64)(uintptr_t)src, len, translated, iov, + ARRAY_SIZE(iov), VHOST_MAP_RO); + if (ret == -ENOBUFS) + ret = ARRAY_SIZE(iov); + else if (ret < 0) + return ret; + + iov_iter_init(&iter, ITER_SOURCE, iov, ret, *translated); + + return copy_from_iter(dst, *translated, &iter); +} + +static inline int copy_to_va(const struct vringh *vrh, void *dst, void *src, + u64 len, u64 *translated) +{ + struct iovec iov[16]; + struct iov_iter iter; + int ret; + + ret = iotlb_translate_va(vrh, (u64)(uintptr_t)dst, len, translated, iov, + ARRAY_SIZE(iov), VHOST_MAP_WO); + if (ret == -ENOBUFS) + ret = ARRAY_SIZE(iov); + else if (ret < 0) + return ret; + + iov_iter_init(&iter, ITER_DEST, iov, ret, *translated); + + return copy_to_iter(src, *translated, &iter); +} + +static int iotlb_translate_pa(const struct vringh *vrh, + u64 addr, u64 len, u64 *translated, + struct bio_vec iov[], + int iov_size, u32 perm) +{ + struct vhost_iotlb_map *map; + struct vhost_iotlb *iotlb = vrh->iotlb; u64 s = 0, last = addr + len - 1; + int ret = 0; spin_lock(vrh->iotlb_lock); @@ -1141,28 +1225,61 @@ static int iotlb_translate(const struct vringh *vrh, return ret; } +static inline int copy_from_pa(const struct vringh *vrh, void *dst, void *src, + u64 len, u64 *translated) +{ + struct bio_vec iov[16]; + struct iov_iter iter; + int ret; + + ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)src, len, translated, iov, + ARRAY_SIZE(iov), VHOST_MAP_RO); + if (ret == -ENOBUFS) + ret = ARRAY_SIZE(iov); + else if (ret < 0) + return ret; + + iov_iter_bvec(&iter, ITER_SOURCE, iov, ret, *translated); + + return copy_from_iter(dst, *translated, &iter); +} + +static inline int copy_to_pa(const struct vringh *vrh, void *dst, void *src, + u64 len, u64 *translated) +{ + struct bio_vec iov[16]; + struct iov_iter iter; + int ret; + + ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)dst, len, translated, iov, + ARRAY_SIZE(iov), VHOST_MAP_WO); + if (ret == -ENOBUFS) + ret = ARRAY_SIZE(iov); + else if (ret < 0) + return ret; + + iov_iter_bvec(&iter, ITER_DEST, iov, ret, *translated); + + return copy_to_iter(src, *translated, &iter); +} + static inline int copy_from_iotlb(const struct vringh *vrh, void *dst, void *src, size_t len) { u64 total_translated = 0; while (total_translated < len) { - struct bio_vec iov[16]; - struct iov_iter iter; u64 translated; 
int ret; - ret = iotlb_translate(vrh, (u64)(uintptr_t)src, - len - total_translated, &translated, - iov, ARRAY_SIZE(iov), VHOST_MAP_RO); - if (ret == -ENOBUFS) - ret = ARRAY_SIZE(iov); - else if (ret < 0) - return ret; - - iov_iter_bvec(&iter, ITER_SOURCE, iov, ret, translated); + if (vrh->use_va) { + ret = copy_from_va(vrh, dst, src, + len - total_translated, &translated); + } else { + ret = copy_from_pa(vrh, dst, src, + len - total_translated, &translated); + } - ret = copy_from_iter(dst, translated, &iter); if (ret < 0) return ret; @@ -1180,22 +1297,17 @@ static inline int copy_to_iotlb(const struct vringh *vrh, void *dst, u64 total_translated = 0; while (total_translated < len) { - struct bio_vec iov[16]; - struct iov_iter iter; u64 translated; int ret; - ret = iotlb_translate(vrh, (u64)(uintptr_t)dst, - len - total_translated, &translated, - iov, ARRAY_SIZE(iov), VHOST_MAP_WO); - if (ret == -ENOBUFS) - ret = ARRAY_SIZE(iov); - else if (ret < 0) - return ret; - - iov_iter_bvec(&iter, ITER_DEST, iov, ret, translated); + if (vrh->use_va) { + ret = copy_to_va(vrh, dst, src, + len - total_translated, &translated); + } else { + ret = copy_to_pa(vrh, dst, src, + len - total_translated, &translated); + } - ret = copy_to_iter(src, translated, &iter); if (ret < 0) return ret; @@ -1210,20 +1322,37 @@ static inline int copy_to_iotlb(const struct vringh *vrh, void *dst, static inline int getu16_iotlb(const struct vringh *vrh, u16 *val, const __virtio16 *p) { - struct bio_vec iov; - void *kaddr, *from; int ret; /* Atomic read is needed for getu16 */ - ret = iotlb_translate(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL, - &iov, 1, VHOST_MAP_RO); - if (ret < 0) - return ret; + if (vrh->use_va) { + struct iovec iov; + __virtio16 tmp; + + ret = iotlb_translate_va(vrh, (u64)(uintptr_t)p, sizeof(*p), + NULL, &iov, 1, VHOST_MAP_RO); + if (ret < 0) + return ret; - kaddr = kmap_local_page(iov.bv_page); - from = kaddr + iov.bv_offset; - *val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from)); - kunmap_local(kaddr); + ret = __get_user(tmp, (__virtio16 __user *)iov.iov_base); + if (ret) + return ret; + + *val = vringh16_to_cpu(vrh, tmp); + } else { + struct bio_vec iov; + void *kaddr, *from; + + ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)p, sizeof(*p), + NULL, &iov, 1, VHOST_MAP_RO); + if (ret < 0) + return ret; + + kaddr = kmap_local_page(iov.bv_page); + from = kaddr + iov.bv_offset; + *val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from)); + kunmap_local(kaddr); + } return 0; } @@ -1231,20 +1360,37 @@ static inline int getu16_iotlb(const struct vringh *vrh, static inline int putu16_iotlb(const struct vringh *vrh, __virtio16 *p, u16 val) { - struct bio_vec iov; - void *kaddr, *to; int ret; /* Atomic write is needed for putu16 */ - ret = iotlb_translate(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL, - &iov, 1, VHOST_MAP_WO); - if (ret < 0) - return ret; + if (vrh->use_va) { + struct iovec iov; + __virtio16 tmp; - kaddr = kmap_local_page(iov.bv_page); - to = kaddr + iov.bv_offset; - WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val)); - kunmap_local(kaddr); + ret = iotlb_translate_va(vrh, (u64)(uintptr_t)p, sizeof(*p), + NULL, &iov, 1, VHOST_MAP_RO); + if (ret < 0) + return ret; + + tmp = cpu_to_vringh16(vrh, val); + + ret = __put_user(tmp, (__virtio16 __user *)iov.iov_base); + if (ret) + return ret; + } else { + struct bio_vec iov; + void *kaddr, *to; + + ret = iotlb_translate_pa(vrh, (u64)(uintptr_t)p, sizeof(*p), NULL, + &iov, 1, VHOST_MAP_WO); + if (ret < 0) + return ret; + + kaddr = 
kmap_local_page(iov.bv_page);
+		to = kaddr + iov.bv_offset;
+		WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
+		kunmap_local(kaddr);
+	}
 
 	return 0;
 }
@@ -1306,6 +1452,7 @@ static inline int putused_iotlb(const struct vringh *vrh,
  * @features: the feature bits for this ring.
  * @num: the number of elements.
  * @weak_barriers: true if we only need memory barriers, not I/O.
+ * @use_va: true if IOTLB contains user VA
  * @desc: the userpace descriptor pointer.
  * @avail: the userpace avail pointer.
  * @used: the userpace used pointer.
@@ -1313,11 +1460,13 @@ static inline int putused_iotlb(const struct vringh *vrh,
  * Returns an error if num is invalid.
  */
 int vringh_init_iotlb(struct vringh *vrh, u64 features,
-		      unsigned int num, bool weak_barriers,
+		      unsigned int num, bool weak_barriers, bool use_va,
 		      struct vring_desc *desc,
 		      struct vring_avail *avail,
 		      struct vring_used *used)
 {
+	vrh->use_va = use_va;
+
 	return vringh_init_kern(vrh, features, num, weak_barriers,
 				desc, avail, used);
 }
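A hedged usage sketch of the updated API: the extra "use_va" argument stays
false for the existing callers touched above (mlx5_vnet, vdpa_sim) and is
meant to be set to true by a parent that binds the user's mm. The
example_setup_vring() wrapper below is hypothetical:

#include <linux/vringh.h>

static int example_setup_vring(struct vringh *vrh, u64 features,
			       struct vring_desc *desc,
			       struct vring_avail *avail,
			       struct vring_used *used,
			       bool user_va)
{
	/* With use_va == true, the iotlb helpers treat translated
	 * addresses as user VAs (struct iovec + copy_{from,to}_iter)
	 * instead of physical addresses wrapped in bio_vecs. */
	return vringh_init_iotlb(vrh, features, 256 /* num */,
				 true /* weak_barriers */, user_va,
				 desc, avail, used);
}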
From patchwork Thu Mar 2 11:34:18 2023
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Andrey Zhadchenko, eperezma@redhat.com, netdev@vger.kernel.org,
    stefanha@redhat.com, linux-kernel@vger.kernel.org, Jason Wang,
    "Michael S. Tsirkin", kvm@vger.kernel.org, Stefano Garzarella
Subject: [PATCH v2 5/8] vdpa_sim: make devices agnostic for work management
Date: Thu, 2 Mar 2023 12:34:18 +0100
Message-Id: <20230302113421.174582-6-sgarzare@redhat.com>
In-Reply-To: <20230302113421.174582-1-sgarzare@redhat.com>
References: <20230302113421.174582-1-sgarzare@redhat.com>

Let's move work management inside the vdpa_sim core. This way we can
easily change how we manage the work, without having to change the
devices each time.
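Based on the helpers this patch introduces (see the diff below), a simulated
device ends up following roughly this pattern; the vdpasim_foo_* names are
made up for illustration:

#include "vdpa_sim.h"

static void vdpasim_foo_work(struct vdpasim *vdpasim)
{
	bool more = false;

	/* handle the queued requests; the device no longer knows (or
	 * cares) how the core actually runs this function; set "more"
	 * if the queues were not fully drained */

	if (more)
		vdpasim_schedule_work(vdpasim);
}

static struct vdpasim_dev_attr foo_dev_attr = {
	/* ... */
	.work_fn = vdpasim_foo_work,	/* now takes a struct vdpasim *,
					 * not a struct work_struct * */
};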
Signed-off-by: Stefano Garzarella Acked-by: Eugenio Pérez Martin Acked-by: Jason Wang --- drivers/vdpa/vdpa_sim/vdpa_sim.h | 3 ++- drivers/vdpa/vdpa_sim/vdpa_sim.c | 17 +++++++++++++++-- drivers/vdpa/vdpa_sim/vdpa_sim_blk.c | 6 ++---- drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 6 ++---- 4 files changed, 21 insertions(+), 11 deletions(-) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.h b/drivers/vdpa/vdpa_sim/vdpa_sim.h index 144858636c10..acee20faaf6a 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.h +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.h @@ -45,7 +45,7 @@ struct vdpasim_dev_attr { u32 ngroups; u32 nas; - work_func_t work_fn; + void (*work_fn)(struct vdpasim *vdpasim); void (*get_config)(struct vdpasim *vdpasim, void *config); void (*set_config)(struct vdpasim *vdpasim, const void *config); int (*get_stats)(struct vdpasim *vdpasim, u16 idx, @@ -78,6 +78,7 @@ struct vdpasim { struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *attr, const struct vdpa_dev_set_config *config); +void vdpasim_schedule_work(struct vdpasim *vdpasim); /* TODO: cross-endian support */ static inline bool vdpasim_is_little_endian(struct vdpasim *vdpasim) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index 481eb156658b..a6ee830efc38 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -116,6 +116,13 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim) static const struct vdpa_config_ops vdpasim_config_ops; static const struct vdpa_config_ops vdpasim_batch_config_ops; +static void vdpasim_work_fn(struct work_struct *work) +{ + struct vdpasim *vdpasim = container_of(work, struct vdpasim, work); + + vdpasim->dev_attr.work_fn(vdpasim); +} + struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, const struct vdpa_dev_set_config *config) { @@ -152,7 +159,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, vdpasim = vdpa_to_sim(vdpa); vdpasim->dev_attr = *dev_attr; - INIT_WORK(&vdpasim->work, dev_attr->work_fn); + INIT_WORK(&vdpasim->work, vdpasim_work_fn); spin_lock_init(&vdpasim->lock); spin_lock_init(&vdpasim->iommu_lock); @@ -203,6 +210,12 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, } EXPORT_SYMBOL_GPL(vdpasim_create); +void vdpasim_schedule_work(struct vdpasim *vdpasim) +{ + schedule_work(&vdpasim->work); +} +EXPORT_SYMBOL_GPL(vdpasim_schedule_work); + static int vdpasim_set_vq_address(struct vdpa_device *vdpa, u16 idx, u64 desc_area, u64 driver_area, u64 device_area) @@ -237,7 +250,7 @@ static void vdpasim_kick_vq(struct vdpa_device *vdpa, u16 idx) } if (vq->ready) - schedule_work(&vdpasim->work); + vdpasim_schedule_work(vdpasim); } static void vdpasim_set_vq_cb(struct vdpa_device *vdpa, u16 idx, diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c index 5117959bed8a..eb4897c8541e 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c @@ -11,7 +11,6 @@ #include #include #include -#include #include #include #include @@ -286,9 +285,8 @@ static bool vdpasim_blk_handle_req(struct vdpasim *vdpasim, return handled; } -static void vdpasim_blk_work(struct work_struct *work) +static void vdpasim_blk_work(struct vdpasim *vdpasim) { - struct vdpasim *vdpasim = container_of(work, struct vdpasim, work); bool reschedule = false; int i; @@ -326,7 +324,7 @@ static void vdpasim_blk_work(struct work_struct *work) spin_unlock(&vdpasim->lock); if (reschedule) - schedule_work(&vdpasim->work); + vdpasim_schedule_work(vdpasim); } static void 
vdpasim_blk_get_config(struct vdpasim *vdpasim, void *config) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c index 862f405362de..e61a9ecbfafe 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c @@ -11,7 +11,6 @@ #include #include #include -#include #include #include #include @@ -192,9 +191,8 @@ static void vdpasim_handle_cvq(struct vdpasim *vdpasim) u64_stats_update_end(&net->cq_stats.syncp); } -static void vdpasim_net_work(struct work_struct *work) +static void vdpasim_net_work(struct vdpasim *vdpasim) { - struct vdpasim *vdpasim = container_of(work, struct vdpasim, work); struct vdpasim_virtqueue *txq = &vdpasim->vqs[1]; struct vdpasim_virtqueue *rxq = &vdpasim->vqs[0]; struct vdpasim_net *net = sim_to_net(vdpasim); @@ -260,7 +258,7 @@ static void vdpasim_net_work(struct work_struct *work) vdpasim_net_complete(rxq, write); if (tx_pkts > 4) { - schedule_work(&vdpasim->work); + vdpasim_schedule_work(vdpasim); goto out; } } From patchwork Thu Mar 2 11:34:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefano Garzarella X-Patchwork-Id: 13157135 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5E557C7EE33 for ; Thu, 2 Mar 2023 11:36:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230094AbjCBLgk (ORCPT ); Thu, 2 Mar 2023 06:36:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55570 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229457AbjCBLgG (ORCPT ); Thu, 2 Mar 2023 06:36:06 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EB72615168 for ; Thu, 2 Mar 2023 03:35:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1677756915; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=garxf7u2PYyPkA/AdwQgG8IfzxLSGUACcilXDazWE4o=; b=i61uytiwBC/BrFsyB2P1FdF67SB+oM2pSQibmHh33nfBmFC3zIcpiHm+PEqj0GnjU37Jsd MF7jovTJQlR4jLMkyAjp0uDfe+p+LNL4NeZysWPgpKOg0Q14216vZ5kLfH9SGQ9O8wlYF/ iH6EeZyrHa01DblUBJXpJ0ZmAbQw10o= Received: from mail-qt1-f197.google.com (mail-qt1-f197.google.com [209.85.160.197]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_128_GCM_SHA256) id us-mta-550-8ZurzSFlP8eg1JN6uTcZWQ-1; Thu, 02 Mar 2023 06:35:13 -0500 X-MC-Unique: 8ZurzSFlP8eg1JN6uTcZWQ-1 Received: by mail-qt1-f197.google.com with SMTP id g11-20020ac8774b000000b003bfa92f56cbso8209513qtu.3 for ; Thu, 02 Mar 2023 03:35:13 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1677756913; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=garxf7u2PYyPkA/AdwQgG8IfzxLSGUACcilXDazWE4o=; b=FvZuugXFEK5CeqkntnB0O3L7rCOmj/Bk5GFONBwaUZlSLSyfZnz2ogESK36i0CZwxz H7kWdDpB1u90fnpKDJ6FVTu5f0nWchFZNOh2Mx0dBp/nWnE2c2PDm+KV3FjJapQoXGD/ 
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Andrey Zhadchenko, eperezma@redhat.com, netdev@vger.kernel.org,
    stefanha@redhat.com, linux-kernel@vger.kernel.org, Jason Wang,
    "Michael S. Tsirkin", kvm@vger.kernel.org, Stefano Garzarella
Subject: [PATCH v2 6/8] vdpa_sim: use kthread worker
Date: Thu, 2 Mar 2023 12:34:19 +0100
Message-Id: <20230302113421.174582-7-sgarzare@redhat.com>
In-Reply-To: <20230302113421.174582-1-sgarzare@redhat.com>
References: <20230302113421.174582-1-sgarzare@redhat.com>

Let's use our own kthread to run device jobs. This gives us more
flexibility; in particular, we can attach the kthread to the user
address space when vDPA uses the user's VA.
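For reference, a minimal sketch of the kthread-worker pattern being adopted
(generic example, not the exact vdpa_sim code in the diff below):

#include <linux/kthread.h>
#include <linux/err.h>

struct foo {
	struct kthread_worker *worker;
	struct kthread_work work;
	bool pending;
};

static void foo_work_fn(struct kthread_work *work)
{
	struct foo *f = container_of(work, struct foo, work);

	/* Runs in a dedicated kernel thread, so it could also call
	 * kthread_use_mm() to operate on a bound user address space. */
	f->pending = false;
}

static int foo_init(struct foo *f)
{
	kthread_init_work(&f->work, foo_work_fn);
	f->worker = kthread_create_worker(0, "foo-worker");
	if (IS_ERR(f->worker))
		return PTR_ERR(f->worker);

	f->pending = true;
	kthread_queue_work(f->worker, &f->work);	/* replaces schedule_work() */
	return 0;
}

static void foo_fini(struct foo *f)
{
	kthread_cancel_work_sync(&f->work);
	kthread_destroy_worker(f->worker);
}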
Signed-off-by: Stefano Garzarella Acked-by: Jason Wang --- drivers/vdpa/vdpa_sim/vdpa_sim.h | 3 ++- drivers/vdpa/vdpa_sim/vdpa_sim.c | 17 ++++++++++++----- 2 files changed, 14 insertions(+), 6 deletions(-) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.h b/drivers/vdpa/vdpa_sim/vdpa_sim.h index acee20faaf6a..ce83f9130a5d 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.h +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.h @@ -57,7 +57,8 @@ struct vdpasim_dev_attr { struct vdpasim { struct vdpa_device vdpa; struct vdpasim_virtqueue *vqs; - struct work_struct work; + struct kthread_worker *worker; + struct kthread_work work; struct vdpasim_dev_attr dev_attr; /* spinlock to synchronize virtqueue state */ spinlock_t lock; diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index a6ee830efc38..6feb29726c2a 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -11,8 +11,8 @@ #include #include #include +#include #include -#include #include #include #include @@ -116,7 +116,7 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim) static const struct vdpa_config_ops vdpasim_config_ops; static const struct vdpa_config_ops vdpasim_batch_config_ops; -static void vdpasim_work_fn(struct work_struct *work) +static void vdpasim_work_fn(struct kthread_work *work) { struct vdpasim *vdpasim = container_of(work, struct vdpasim, work); @@ -159,7 +159,13 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, vdpasim = vdpa_to_sim(vdpa); vdpasim->dev_attr = *dev_attr; - INIT_WORK(&vdpasim->work, vdpasim_work_fn); + + kthread_init_work(&vdpasim->work, vdpasim_work_fn); + vdpasim->worker = kthread_create_worker(0, "vDPA sim worker: %s", + dev_attr->name); + if (IS_ERR(vdpasim->worker)) + goto err_iommu; + spin_lock_init(&vdpasim->lock); spin_lock_init(&vdpasim->iommu_lock); @@ -212,7 +218,7 @@ EXPORT_SYMBOL_GPL(vdpasim_create); void vdpasim_schedule_work(struct vdpasim *vdpasim) { - schedule_work(&vdpasim->work); + kthread_queue_work(vdpasim->worker, &vdpasim->work); } EXPORT_SYMBOL_GPL(vdpasim_schedule_work); @@ -612,7 +618,8 @@ static void vdpasim_free(struct vdpa_device *vdpa) struct vdpasim *vdpasim = vdpa_to_sim(vdpa); int i; - cancel_work_sync(&vdpasim->work); + kthread_cancel_work_sync(&vdpasim->work); + kthread_destroy_worker(vdpasim->worker); for (i = 0; i < vdpasim->dev_attr.nvqs; i++) { vringh_kiov_cleanup(&vdpasim->vqs[i].out_iov); From patchwork Thu Mar 2 11:34:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Stefano Garzarella X-Patchwork-Id: 13157134 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E5A3EC7EE30 for ; Thu, 2 Mar 2023 11:36:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229894AbjCBLgi (ORCPT ); Thu, 2 Mar 2023 06:36:38 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55654 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230013AbjCBLgD (ORCPT ); Thu, 2 Mar 2023 06:36:03 -0500 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6AD6C1E1FD for ; Thu, 2 Mar 2023 03:35:19 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1677756918; 
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Andrey Zhadchenko, eperezma@redhat.com, netdev@vger.kernel.org,
    stefanha@redhat.com, linux-kernel@vger.kernel.org, Jason Wang,
    "Michael S. Tsirkin", kvm@vger.kernel.org, Stefano Garzarella
Subject: [PATCH v2 7/8] vdpa_sim: replace the spinlock with a mutex to protect the state
Date: Thu, 2 Mar 2023 12:34:20 +0100
Message-Id: <20230302113421.174582-8-sgarzare@redhat.com>
In-Reply-To: <20230302113421.174582-1-sgarzare@redhat.com>
References: <20230302113421.174582-1-sgarzare@redhat.com>

The spinlock we use to protect the state of the simulator is sometimes
held for a long time (for example, when devices handle requests).

It also prevents us from calling functions that might sleep (such as
kthread_flush_work() in the next patch), forcing us to release and
re-take the lock around those calls.

For these reasons, let's replace the spinlock with a mutex that gives
us more flexibility.
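A small illustrative sketch of the constraint motivating this change (generic
code, not from vdpa_sim): flushing a kthread work may sleep, so it cannot be
done while holding a spinlock:

#include <linux/kthread.h>
#include <linux/mutex.h>

struct bar {
	struct mutex lock;		/* was a spinlock_t; set up with mutex_init() */
	struct kthread_work work;
	bool running;
};

static void bar_stop(struct bar *b)
{
	mutex_lock(&b->lock);
	b->running = false;

	/* kthread_flush_work() may sleep: fine under a mutex, but it
	 * would have forced us to drop and re-take a spinlock. */
	kthread_flush_work(&b->work);

	mutex_unlock(&b->lock);
}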
Suggested-by: Jason Wang Signed-off-by: Stefano Garzarella Acked-by: Jason Wang --- drivers/vdpa/vdpa_sim/vdpa_sim.h | 4 ++-- drivers/vdpa/vdpa_sim/vdpa_sim.c | 34 ++++++++++++++-------------- drivers/vdpa/vdpa_sim/vdpa_sim_blk.c | 4 ++-- drivers/vdpa/vdpa_sim/vdpa_sim_net.c | 4 ++-- 4 files changed, 23 insertions(+), 23 deletions(-) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.h b/drivers/vdpa/vdpa_sim/vdpa_sim.h index ce83f9130a5d..4774292fba8c 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.h +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.h @@ -60,8 +60,8 @@ struct vdpasim { struct kthread_worker *worker; struct kthread_work work; struct vdpasim_dev_attr dev_attr; - /* spinlock to synchronize virtqueue state */ - spinlock_t lock; + /* mutex to synchronize virtqueue state */ + struct mutex mutex; /* virtio config according to device type */ void *config; struct vhost_iotlb *iommu; diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index 6feb29726c2a..a28103a67ae7 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -166,7 +166,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr, if (IS_ERR(vdpasim->worker)) goto err_iommu; - spin_lock_init(&vdpasim->lock); + mutex_init(&vdpasim->mutex); spin_lock_init(&vdpasim->iommu_lock); dev = &vdpasim->vdpa.dev; @@ -275,13 +275,13 @@ static void vdpasim_set_vq_ready(struct vdpa_device *vdpa, u16 idx, bool ready) struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx]; bool old_ready; - spin_lock(&vdpasim->lock); + mutex_lock(&vdpasim->mutex); old_ready = vq->ready; vq->ready = ready; if (vq->ready && !old_ready) { vdpasim_queue_ready(vdpasim, idx); } - spin_unlock(&vdpasim->lock); + mutex_unlock(&vdpasim->mutex); } static bool vdpasim_get_vq_ready(struct vdpa_device *vdpa, u16 idx) @@ -299,9 +299,9 @@ static int vdpasim_set_vq_state(struct vdpa_device *vdpa, u16 idx, struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx]; struct vringh *vrh = &vq->vring; - spin_lock(&vdpasim->lock); + mutex_lock(&vdpasim->mutex); vrh->last_avail_idx = state->split.avail_index; - spin_unlock(&vdpasim->lock); + mutex_unlock(&vdpasim->mutex); return 0; } @@ -398,9 +398,9 @@ static u8 vdpasim_get_status(struct vdpa_device *vdpa) struct vdpasim *vdpasim = vdpa_to_sim(vdpa); u8 status; - spin_lock(&vdpasim->lock); + mutex_lock(&vdpasim->mutex); status = vdpasim->status; - spin_unlock(&vdpasim->lock); + mutex_unlock(&vdpasim->mutex); return status; } @@ -409,19 +409,19 @@ static void vdpasim_set_status(struct vdpa_device *vdpa, u8 status) { struct vdpasim *vdpasim = vdpa_to_sim(vdpa); - spin_lock(&vdpasim->lock); + mutex_lock(&vdpasim->mutex); vdpasim->status = status; - spin_unlock(&vdpasim->lock); + mutex_unlock(&vdpasim->mutex); } static int vdpasim_reset(struct vdpa_device *vdpa) { struct vdpasim *vdpasim = vdpa_to_sim(vdpa); - spin_lock(&vdpasim->lock); + mutex_lock(&vdpasim->mutex); vdpasim->status = 0; vdpasim_do_reset(vdpasim); - spin_unlock(&vdpasim->lock); + mutex_unlock(&vdpasim->mutex); return 0; } @@ -430,9 +430,9 @@ static int vdpasim_suspend(struct vdpa_device *vdpa) { struct vdpasim *vdpasim = vdpa_to_sim(vdpa); - spin_lock(&vdpasim->lock); + mutex_lock(&vdpasim->mutex); vdpasim->running = false; - spin_unlock(&vdpasim->lock); + mutex_unlock(&vdpasim->mutex); return 0; } @@ -442,7 +442,7 @@ static int vdpasim_resume(struct vdpa_device *vdpa) struct vdpasim *vdpasim = vdpa_to_sim(vdpa); int i; - spin_lock(&vdpasim->lock); + mutex_lock(&vdpasim->mutex); vdpasim->running = true; if (vdpasim->pending_kick) 
@@ -453,7 +453,7 @@ static int vdpasim_resume(struct vdpa_device *vdpa)
 		vdpasim->pending_kick = false;
 	}
 
-	spin_unlock(&vdpasim->lock);
+	mutex_unlock(&vdpasim->mutex);
 
 	return 0;
 }
@@ -525,14 +525,14 @@ static int vdpasim_set_group_asid(struct vdpa_device *vdpa, unsigned int group,
 	iommu = &vdpasim->iommu[asid];
 
-	spin_lock(&vdpasim->lock);
+	mutex_lock(&vdpasim->mutex);
 
 	for (i = 0; i < vdpasim->dev_attr.nvqs; i++)
 		if (vdpasim_get_vq_group(vdpa, i) == group)
 			vringh_set_iotlb(&vdpasim->vqs[i].vring, iommu,
 					 &vdpasim->iommu_lock);
 
-	spin_unlock(&vdpasim->lock);
+	mutex_unlock(&vdpasim->mutex);
 
 	return 0;
 }
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
index eb4897c8541e..568119e1553f 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim_blk.c
@@ -290,7 +290,7 @@ static void vdpasim_blk_work(struct vdpasim *vdpasim)
 	bool reschedule = false;
 	int i;
 
-	spin_lock(&vdpasim->lock);
+	mutex_lock(&vdpasim->mutex);
 
 	if (!(vdpasim->status & VIRTIO_CONFIG_S_DRIVER_OK))
 		goto out;
@@ -321,7 +321,7 @@ static void vdpasim_blk_work(struct vdpasim *vdpasim)
 		}
 	}
 out:
-	spin_unlock(&vdpasim->lock);
+	mutex_unlock(&vdpasim->mutex);
 
 	if (reschedule)
 		vdpasim_schedule_work(vdpasim);
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
index e61a9ecbfafe..7ab434592bfe 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim_net.c
@@ -201,7 +201,7 @@ static void vdpasim_net_work(struct vdpasim *vdpasim)
 	u64 rx_drops = 0, rx_overruns = 0, rx_errors = 0, tx_errors = 0;
 	int err;
 
-	spin_lock(&vdpasim->lock);
+	mutex_lock(&vdpasim->mutex);
 
 	if (!vdpasim->running)
 		goto out;
@@ -264,7 +264,7 @@ static void vdpasim_net_work(struct vdpasim *vdpasim)
 	}
 out:
-	spin_unlock(&vdpasim->lock);
+	mutex_unlock(&vdpasim->mutex);
 
 	u64_stats_update_begin(&net->tx_stats.syncp);
 	net->tx_stats.pkts += tx_pkts;

From patchwork Thu Mar 2 11:34:21 2023
X-Patchwork-Submitter: Stefano Garzarella
X-Patchwork-Id: 13157136
From: Stefano Garzarella
To: virtualization@lists.linux-foundation.org
Cc: Andrey Zhadchenko, eperezma@redhat.com, netdev@vger.kernel.org,
    stefanha@redhat.com, linux-kernel@vger.kernel.org, Jason Wang,
    "Michael S. Tsirkin", kvm@vger.kernel.org, Stefano Garzarella
Subject: [PATCH v2 8/8] vdpa_sim: add support for user VA
Date: Thu, 2 Mar 2023 12:34:21 +0100
Message-Id: <20230302113421.174582-9-sgarzare@redhat.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230302113421.174582-1-sgarzare@redhat.com>
References: <20230302113421.174582-1-sgarzare@redhat.com>

The new "use_va" module parameter (default: true) is used in
vdpa_alloc_device() to inform the vDPA framework that the device
supports VA.

vringh is initialized to use VA only when "use_va" is true and the
user's mm has been bound, i.e. only when the bus supports user VA
(e.g. vhost-vdpa).

The vdpasim_mm_work_fn work is used to attach the kthread to the user
address space when the .bind_mm callback is invoked, and to detach it
when the .unbind_mm callback is invoked.
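As a purely illustrative aside (not part of this patch): roughly how a bus
driver such as vhost-vdpa could drive the optional bind_mm/unbind_mm
callbacks against a device like the simulator. The example_* helpers are
hypothetical and error handling is kept to a minimum; only the callback
signatures come from the series.

#include <linux/sched.h>
#include <linux/vdpa.h>

/* Hedged sketch: bind the caller's mm to a vDPA device that implements
 * the optional bind_mm callback, and unbind it later. */
static int example_bind_current_mm(struct vdpa_device *vdev)
{
	const struct vdpa_config_ops *ops = vdev->config;

	if (!ops->bind_mm)	/* the callback is optional */
		return 0;

	return ops->bind_mm(vdev, current->mm);
}

static void example_unbind_mm(struct vdpa_device *vdev)
{
	const struct vdpa_config_ops *ops = vdev->config;

	if (ops->unbind_mm)
		ops->unbind_mm(vdev);
}

In the vhost-vdpa case the mm presumably comes from the owner task, so a
device that implements these callbacks can dereference guest/user virtual
addresses directly from its worker thread.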
Signed-off-by: Stefano Garzarella
---

Notes:
    v2:
    - `use_va` set to true by default [Eugenio]
    - added support for the new unbind_mm callback [Jason]
    - removed the unbind_mm call in vdpasim_do_reset() [Jason]
    - avoided releasing the lock while calling kthread_flush_work(), since
      we are now using a mutex to protect the device state

 drivers/vdpa/vdpa_sim/vdpa_sim.h |  1 +
 drivers/vdpa/vdpa_sim/vdpa_sim.c | 98 +++++++++++++++++++++++++++++++-
 2 files changed, 97 insertions(+), 2 deletions(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.h b/drivers/vdpa/vdpa_sim/vdpa_sim.h
index 4774292fba8c..3a42887d05d9 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.h
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.h
@@ -59,6 +59,7 @@ struct vdpasim {
 	struct vdpasim_virtqueue *vqs;
 	struct kthread_worker *worker;
 	struct kthread_work work;
+	struct mm_struct *mm_bound;
 	struct vdpasim_dev_attr dev_attr;
 	/* mutex to synchronize virtqueue state */
 	struct mutex mutex;
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index a28103a67ae7..eda26bc33df5 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -35,10 +35,77 @@ module_param(max_iotlb_entries, int, 0444);
 MODULE_PARM_DESC(max_iotlb_entries,
 		 "Maximum number of iotlb entries for each address space. 0 means unlimited. (default: 2048)");
 
+static bool use_va = true;
+module_param(use_va, bool, 0444);
+MODULE_PARM_DESC(use_va, "Enable/disable the device's ability to use VA");
+
 #define VDPASIM_QUEUE_ALIGN PAGE_SIZE
 #define VDPASIM_QUEUE_MAX 256
 #define VDPASIM_VENDOR_ID 0
 
+struct vdpasim_mm_work {
+	struct kthread_work work;
+	struct mm_struct *mm;
+	bool bind;
+	int ret;
+};
+
+static void vdpasim_mm_work_fn(struct kthread_work *work)
+{
+	struct vdpasim_mm_work *mm_work =
+		container_of(work, struct vdpasim_mm_work, work);
+
+	mm_work->ret = 0;
+
+	if (mm_work->bind) {
+		kthread_use_mm(mm_work->mm);
+		//TODO: should we attach the cgroup of the mm owner?
+	} else {
+		kthread_unuse_mm(mm_work->mm);
+	}
+}
+
+static void vdpasim_worker_queue_mm(struct vdpasim *vdpasim,
+				    struct vdpasim_mm_work *mm_work)
+{
+	struct kthread_work *work = &mm_work->work;
+
+	kthread_init_work(work, vdpasim_mm_work_fn);
+	kthread_queue_work(vdpasim->worker, work);
+
+	kthread_flush_work(work);
+}
+
+static int vdpasim_worker_bind_mm(struct vdpasim *vdpasim,
+				  struct mm_struct *new_mm)
+{
+	struct vdpasim_mm_work mm_work;
+
+	mm_work.mm = new_mm;
+	mm_work.bind = true;
+
+	vdpasim_worker_queue_mm(vdpasim, &mm_work);
+
+	if (!mm_work.ret)
+		vdpasim->mm_bound = new_mm;
+
+	return mm_work.ret;
+}
+
+static void vdpasim_worker_unbind_mm(struct vdpasim *vdpasim)
+{
+	struct vdpasim_mm_work mm_work;
+
+	if (!vdpasim->mm_bound)
+		return;
+
+	mm_work.mm = vdpasim->mm_bound;
+	mm_work.bind = false;
+
+	vdpasim_worker_queue_mm(vdpasim, &mm_work);
+
+	vdpasim->mm_bound = NULL;
+}
+
 static struct vdpasim *vdpa_to_sim(struct vdpa_device *vdpa)
 {
 	return container_of(vdpa, struct vdpasim, vdpa);
@@ -59,8 +126,10 @@ static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx)
 {
 	struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx];
 	uint16_t last_avail_idx = vq->vring.last_avail_idx;
+	bool va_enabled = use_va && vdpasim->mm_bound;
 
-	vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, true, false,
+	vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, true,
+			  va_enabled,
 			  (struct vring_desc *)(uintptr_t)vq->desc_addr,
 			  (struct vring_avail *)
 			  (uintptr_t)vq->driver_addr,
@@ -151,7 +220,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 	vdpa = __vdpa_alloc_device(NULL, ops,
 				   dev_attr->ngroups, dev_attr->nas,
 				   dev_attr->alloc_size,
-				   dev_attr->name, false);
+				   dev_attr->name, use_va);
 	if (IS_ERR(vdpa)) {
 		ret = PTR_ERR(vdpa);
 		goto err_alloc;
@@ -571,6 +640,27 @@ static int vdpasim_set_map(struct vdpa_device *vdpa, unsigned int asid,
 	return ret;
 }
 
+static int vdpasim_bind_mm(struct vdpa_device *vdpa, struct mm_struct *mm)
+{
+	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+	int ret;
+
+	mutex_lock(&vdpasim->mutex);
+	ret = vdpasim_worker_bind_mm(vdpasim, mm);
+	mutex_unlock(&vdpasim->mutex);
+
+	return ret;
+}
+
+static void vdpasim_unbind_mm(struct vdpa_device *vdpa)
+{
+	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+
+	mutex_lock(&vdpasim->mutex);
+	vdpasim_worker_unbind_mm(vdpasim);
+	mutex_unlock(&vdpasim->mutex);
+}
+
 static int vdpasim_dma_map(struct vdpa_device *vdpa, unsigned int asid,
 			   u64 iova, u64 size,
 			   u64 pa, u32 perm, void *opaque)
@@ -667,6 +757,8 @@ static const struct vdpa_config_ops vdpasim_config_ops = {
 	.set_group_asid = vdpasim_set_group_asid,
 	.dma_map = vdpasim_dma_map,
 	.dma_unmap = vdpasim_dma_unmap,
+	.bind_mm = vdpasim_bind_mm,
+	.unbind_mm = vdpasim_unbind_mm,
 	.free = vdpasim_free,
 };
 
@@ -701,6 +793,8 @@ static const struct vdpa_config_ops vdpasim_batch_config_ops = {
 	.get_iova_range = vdpasim_get_iova_range,
 	.set_group_asid = vdpasim_set_group_asid,
 	.set_map = vdpasim_set_map,
+	.bind_mm = vdpasim_bind_mm,
+	.unbind_mm = vdpasim_unbind_mm,
 	.free = vdpasim_free,
 };
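A side note on the synchronization pattern used above, as a hedged sketch
with illustrative names (sync_work, run_on_worker are not from the patch):
the stack-allocated vdpasim_mm_work item is safe because kthread_flush_work()
returns only after the handler has finished, so the worker never touches the
caller's stack after the caller has returned. The same pattern in isolation:

#include <linux/kernel.h>
#include <linux/kthread.h>

/* Illustrative only: run a job synchronously on a kthread worker using
 * a work item that lives on the caller's stack. */
struct sync_work {
	struct kthread_work work;
	int ret;
};

static void sync_work_fn(struct kthread_work *work)
{
	struct sync_work *sw = container_of(work, struct sync_work, work);

	sw->ret = 0;	/* do the real job in the worker's context */
}

static int run_on_worker(struct kthread_worker *worker)
{
	struct sync_work sw;

	kthread_init_work(&sw.work, sync_work_fn);
	kthread_queue_work(worker, &sw.work);
	kthread_flush_work(&sw.work);	/* wait until sync_work_fn() is done */

	return sw.ret;
}

This also explains the v2 note about not dropping the lock around
kthread_flush_work(): with a mutex protecting the device state, sleeping
while waiting for the worker is allowed, which would not have been the case
with the old spinlock.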