From patchwork Thu Sep 30 15:30:25 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528739
From: Vivek Goyal <vgoyal@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Subject: [PATCH 01/13] virtio_fs.h: Add notification queue feature bit
Date: Thu, 30 Sep 2021 11:30:25 -0400
Message-Id: <20210930153037.1194279-2-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu

This change will ultimately come from the kernel as a kernel header file
update when the kernel patches get merged.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/standard-headers/linux/virtio_fs.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/standard-headers/linux/virtio_fs.h b/include/standard-headers/linux/virtio_fs.h
index a32fe8a64c..b7f015186e 100644
--- a/include/standard-headers/linux/virtio_fs.h
+++ b/include/standard-headers/linux/virtio_fs.h
@@ -8,6 +8,9 @@
 #include "standard-headers/linux/virtio_config.h"
 #include "standard-headers/linux/virtio_types.h"
 
+/* Feature bits. Notification queue supported */
+#define VIRTIO_FS_F_NOTIFICATION 0
+
 struct virtio_fs_config {
 	/* Filesystem name (UTF-8, not NUL-terminated, padded with NULs) */
 	uint8_t tag[36];

From patchwork Thu Sep 30 15:30:26 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528825
From: Vivek Goyal <vgoyal@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu
Subject: [PATCH 02/13] virtiofsd: fuse.h header file changes for lock notification
Date: Thu, 30 Sep 2021 11:30:26 -0400
Message-Id: <20210930153037.1194279-3-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>

This change comes from a fuse.h kernel header file update. Hence keeping
it in a separate patch.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
---
 include/standard-headers/linux/fuse.h | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/standard-headers/linux/fuse.h b/include/standard-headers/linux/fuse.h
index cce105bfba..0b6218d569 100644
--- a/include/standard-headers/linux/fuse.h
+++ b/include/standard-headers/linux/fuse.h
@@ -181,6 +181,8 @@
  *  - add FUSE_OPEN_KILL_SUIDGID
  *  - extend fuse_setxattr_in, add FUSE_SETXATTR_EXT
  *  - add FUSE_SETXATTR_ACL_KILL_SGID
+ *  7.35
+ *  - add FUSE_NOTIFY_LOCK
  */
 
 #ifndef _LINUX_FUSE_H
@@ -212,7 +214,7 @@
 #define FUSE_KERNEL_VERSION 7
 
 /** Minor version number of this interface */
-#define FUSE_KERNEL_MINOR_VERSION 33
+#define FUSE_KERNEL_MINOR_VERSION 35
 
 /** The node ID of the root inode */
 #define FUSE_ROOT_ID 1
@@ -521,6 +523,7 @@ enum fuse_notify_code {
 	FUSE_NOTIFY_STORE = 4,
 	FUSE_NOTIFY_RETRIEVE = 5,
 	FUSE_NOTIFY_DELETE = 6,
+	FUSE_NOTIFY_LOCK = 7,
 	FUSE_NOTIFY_CODE_MAX,
 };
 
@@ -912,6 +915,12 @@ struct fuse_notify_retrieve_in {
 	uint64_t dummy4;
 };
 
+struct fuse_notify_lock_out {
+	uint64_t unique;
+	int32_t error;
+	int32_t padding;
+};
+
 /* Device ioctls: */
 #define FUSE_DEV_IOC_MAGIC 229
 #define FUSE_DEV_IOC_CLONE	_IOR(FUSE_DEV_IOC_MAGIC, 0, uint32_t)
From patchwork Thu Sep 30 15:30:27 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528727
From: Vivek Goyal <vgoyal@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu
Subject: [PATCH 03/13] virtiofsd: Remove unused virtio_fs_config definition
Date: Thu, 30 Sep 2021 11:30:27 -0400
Message-Id: <20210930153037.1194279-4-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>

"struct virtio_fs_config" definition seems to be unused in
fuse_virtio.c. Remove it.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tools/virtiofsd/fuse_virtio.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index 8f4fd165b9..da7b6a76bf 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -82,12 +82,6 @@ struct fv_VuDev {
     struct fv_QueueInfo **qi;
 };
 
-/* From spec */
-struct virtio_fs_config {
-    char tag[36];
-    uint32_t num_queues;
-};
-
 /* Callback from libvhost-user */
 static uint64_t fv_get_features(VuDev *dev)
 {
From patchwork Thu Sep 30 15:30:28 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528827
From: Vivek Goyal <vgoyal@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu
Subject: [PATCH 04/13] virtiofsd: Add a helper to send element on virtqueue
Date: Thu, 30 Sep 2021 11:30:28 -0400
Message-Id: <20210930153037.1194279-5-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>

We have open-coded logic to take locks and push an element on the
virtqueue in three places. Add a helper and use it everywhere. The code
is easier to read and has fewer lines.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tools/virtiofsd/fuse_virtio.c | 45 ++++++++++++++---------------------
 1 file changed, 18 insertions(+), 27 deletions(-)

diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index da7b6a76bf..fcf12db9cd 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -243,6 +243,21 @@ static void vu_dispatch_unlock(struct fv_VuDev *vud)
     assert(ret == 0);
 }
 
+static void vq_send_element(struct fv_QueueInfo *qi, VuVirtqElement *elem,
+                            ssize_t len)
+{
+    struct fuse_session *se = qi->virtio_dev->se;
+    VuDev *dev = &se->virtio_dev->dev;
+    VuVirtq *q = vu_get_queue(dev, qi->qidx);
+
+    vu_dispatch_rdlock(qi->virtio_dev);
+    pthread_mutex_lock(&qi->vq_lock);
+    vu_queue_push(dev, q, elem, len);
+    vu_queue_notify(dev, q);
+    pthread_mutex_unlock(&qi->vq_lock);
+    vu_dispatch_unlock(qi->virtio_dev);
+}
+
 /*
  * Called back by ll whenever it wants to send a reply/message back
  * The 1st element of the iov starts with the fuse_out_header
@@ -253,8 +268,6 @@ int virtio_send_msg(struct fuse_session *se, struct fuse_chan *ch,
 {
     FVRequest *req = container_of(ch, FVRequest, ch);
     struct fv_QueueInfo *qi = ch->qi;
-    VuDev *dev = &se->virtio_dev->dev;
-    VuVirtq *q = vu_get_queue(dev, qi->qidx);
     VuVirtqElement *elem = &req->elem;
     int ret = 0;
@@ -296,13 +309,7 @@ int virtio_send_msg(struct fuse_session *se, struct fuse_chan *ch,
     copy_iov(iov, count, in_sg, in_num, tosend_len);
 
-    vu_dispatch_rdlock(qi->virtio_dev);
-    pthread_mutex_lock(&qi->vq_lock);
-    vu_queue_push(dev, q, elem, tosend_len);
-    vu_queue_notify(dev, q);
-    pthread_mutex_unlock(&qi->vq_lock);
-    vu_dispatch_unlock(qi->virtio_dev);
-
+    vq_send_element(qi, elem, tosend_len);
     req->reply_sent = true;
 
 err:
@@ -321,8 +328,6 @@ int virtio_send_data_iov(struct fuse_session *se, struct fuse_chan *ch,
 {
     FVRequest *req = container_of(ch, FVRequest, ch);
     struct fv_QueueInfo *qi = ch->qi;
-    VuDev *dev = &se->virtio_dev->dev;
-    VuVirtq *q = vu_get_queue(dev, qi->qidx);
     VuVirtqElement *elem = &req->elem;
     int ret = 0;
     g_autofree struct iovec *in_sg_cpy = NULL;
@@ -430,12 +435,7 @@ int virtio_send_data_iov(struct fuse_session *se, struct fuse_chan *ch,
         out_sg->len = tosend_len;
     }
 
-    vu_dispatch_rdlock(qi->virtio_dev);
-    pthread_mutex_lock(&qi->vq_lock);
-    vu_queue_push(dev, q, elem, tosend_len);
-    vu_queue_notify(dev, q);
-    pthread_mutex_unlock(&qi->vq_lock);
-    vu_dispatch_unlock(qi->virtio_dev);
+    vq_send_element(qi, elem, tosend_len);
     req->reply_sent = true;
     return 0;
 }
@@ -447,7 +447,6 @@ static void fv_queue_worker(gpointer data, gpointer user_data)
 {
     struct fv_QueueInfo *qi = user_data;
     struct fuse_session *se = qi->virtio_dev->se;
-    struct VuDev *dev = &qi->virtio_dev->dev;
     FVRequest *req = data;
     VuVirtqElement *elem = &req->elem;
     struct fuse_buf fbuf = {};
@@ -589,17 +588,9 @@ out:
     /* If the request has no reply, still recycle the virtqueue element */
     if (!req->reply_sent) {
-        struct VuVirtq *q = vu_get_queue(dev, qi->qidx);
-
         fuse_log(FUSE_LOG_DEBUG, "%s: elem %d no reply sent\n", __func__,
                  elem->index);
-
-        vu_dispatch_rdlock(qi->virtio_dev);
-        pthread_mutex_lock(&qi->vq_lock);
-        vu_queue_push(dev, q, elem, 0);
-        vu_queue_notify(dev, q);
-        pthread_mutex_unlock(&qi->vq_lock);
-        vu_dispatch_unlock(qi->virtio_dev);
+        vq_send_element(qi, elem, 0);
     }
 
     pthread_mutex_destroy(&req->ch.lock);
From patchwork Thu Sep 30 15:30:29 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528829
From: Vivek Goyal <vgoyal@redhat.com>
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu
Subject: [PATCH 05/13] virtiofsd: Add a helper to stop all queues
Date: Thu, 30 Sep 2021 11:30:29 -0400
Message-Id: <20210930153037.1194279-6-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>

Use a helper to stop all the queues. I am planning to use this helper
at one more place later in the patch series.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 tools/virtiofsd/fuse_virtio.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index fcf12db9cd..baead08b28 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -740,6 +740,18 @@ static void fv_queue_cleanup_thread(struct fv_VuDev *vud, int qidx)
     vud->qi[qidx] = NULL;
 }
 
+static void stop_all_queues(struct fv_VuDev *vud)
+{
+    for (int i = 0; i < vud->nqueues; i++) {
+        if (!vud->qi[i]) {
+            continue;
+        }
+
+        fuse_log(FUSE_LOG_INFO, "%s: Stopping queue %d thread\n", __func__, i);
+        fv_queue_cleanup_thread(vud, i);
+    }
+}
+
 /* Callback from libvhost-user on start or stop of a queue */
 static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
 {
@@ -870,15 +882,7 @@ int virtio_loop(struct fuse_session *se)
      * Make sure all fv_queue_thread()s quit on exit, as we're about to
      * free virtio dev and fuse session, no one should access them anymore.
      */
-    for (int i = 0; i < se->virtio_dev->nqueues; i++) {
-        if (!se->virtio_dev->qi[i]) {
-            continue;
-        }
-
-        fuse_log(FUSE_LOG_INFO, "%s: Stopping queue %d thread\n", __func__, i);
-        fv_queue_cleanup_thread(se->virtio_dev, i);
-    }
-
+    stop_all_queues(se->virtio_dev);
     fuse_log(FUSE_LOG_INFO, "%s: Exit\n", __func__);
 
     return 0;
-0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1633015892; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=v4RwjsBN+VD0misqMm0HefaBLHKqBs/oGV+OSDFCfxU=; b=aSi1pKJLeLtjOjT/ZXOpEIzDzxWtH5KG/Dhwl5OC4SIhGZ54ymiTehfsFC7WIFDjTHUcIt KxNqjaQGPhfpkK9bqgOveO4TxEqKR2j30NVMWxvFdgUD6CW9vB0qjL+3XY9PkXNUd6cFyc kUxmFoSMYsEElhNuqo37s6zp73rQHgU= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-473-e9qtjP_VPByfx6WuSDdhCQ-1; Thu, 30 Sep 2021 11:31:30 -0400 X-MC-Unique: e9qtjP_VPByfx6WuSDdhCQ-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 09DD6DF8A3; Thu, 30 Sep 2021 15:31:29 +0000 (UTC) Received: from horse.redhat.com (unknown [10.22.16.146]) by smtp.corp.redhat.com (Postfix) with ESMTP id D0D6F60657; Thu, 30 Sep 2021 15:30:48 +0000 (UTC) Received: by horse.redhat.com (Postfix, from userid 10451) id E33B4228285; Thu, 30 Sep 2021 11:30:47 -0400 (EDT) From: Vivek Goyal To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com Subject: [PATCH 06/13] vhost-user-fs: Use helpers to create/cleanup virtqueue Date: Thu, 30 Sep 2021 11:30:30 -0400 Message-Id: <20210930153037.1194279-7-vgoyal@redhat.com> In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com> References: <20210930153037.1194279-1-vgoyal@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=vgoyal@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Received-SPF: pass 
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu

Add helpers to create/cleanup virtqueues and use those helpers. I will
need to reconfigure queues in later patches, and using helpers will allow
reusing the code.

Signed-off-by: Vivek Goyal
Reviewed-by: Stefan Hajnoczi
---
 hw/virtio/vhost-user-fs.c | 87 +++++++++++++++++++++++----------------
 1 file changed, 52 insertions(+), 35 deletions(-)

diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
index c595957983..d1efbc5b18 100644
--- a/hw/virtio/vhost-user-fs.c
+++ b/hw/virtio/vhost-user-fs.c
@@ -139,6 +139,55 @@ static void vuf_set_status(VirtIODevice *vdev, uint8_t status)
     }
 }
 
+static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
+{
+    /*
+     * Not normally called; it's the daemon that handles the queue;
+     * however virtio's cleanup path can call this.
+     */
+}
+
+static void vuf_create_vqs(VirtIODevice *vdev)
+{
+    VHostUserFS *fs = VHOST_USER_FS(vdev);
+    unsigned int i;
+
+    /* Hiprio queue */
+    fs->hiprio_vq = virtio_add_queue(vdev, fs->conf.queue_size,
+                                     vuf_handle_output);
+
+    /* Request queues */
+    fs->req_vqs = g_new(VirtQueue *, fs->conf.num_request_queues);
+    for (i = 0; i < fs->conf.num_request_queues; i++) {
+        fs->req_vqs[i] = virtio_add_queue(vdev, fs->conf.queue_size,
+                                          vuf_handle_output);
+    }
+
+    /* 1 high prio queue, plus the number configured */
+    fs->vhost_dev.nvqs = 1 + fs->conf.num_request_queues;
+    fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
+}
+
+static void vuf_cleanup_vqs(VirtIODevice *vdev)
+{
+    VHostUserFS *fs = VHOST_USER_FS(vdev);
+    unsigned int i;
+
+    virtio_delete_queue(fs->hiprio_vq);
+    fs->hiprio_vq = NULL;
+
+    for (i = 0; i < fs->conf.num_request_queues; i++) {
+        virtio_delete_queue(fs->req_vqs[i]);
+    }
+
+    g_free(fs->req_vqs);
+    fs->req_vqs = NULL;
+
+    fs->vhost_dev.nvqs = 0;
+    g_free(fs->vhost_dev.vqs);
+    fs->vhost_dev.vqs = NULL;
+}
+
 static uint64_t vuf_get_features(VirtIODevice *vdev,
                                  uint64_t features,
                                  Error **errp)
@@ -148,14 +197,6 @@ static uint64_t vuf_get_features(VirtIODevice *vdev,
     return vhost_get_features(&fs->vhost_dev, user_feature_bits, features);
 }
 
-static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
-{
-    /*
-     * Not normally called; it's the daemon that handles the queue;
-     * however virtio's cleanup path can call this.
-     */
-}
-
 static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
                                     bool mask)
 {
@@ -175,7 +216,6 @@ static void vuf_device_realize(DeviceState *dev, Error **errp)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
     VHostUserFS *fs = VHOST_USER_FS(dev);
-    unsigned int i;
     size_t len;
     int ret;
@@ -222,18 +262,7 @@ static void vuf_device_realize(DeviceState *dev, Error **errp)
     virtio_init(vdev, "vhost-user-fs", VIRTIO_ID_FS,
                 sizeof(struct virtio_fs_config));
 
-    /* Hiprio queue */
-    fs->hiprio_vq = virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
-
-    /* Request queues */
-    fs->req_vqs = g_new(VirtQueue *, fs->conf.num_request_queues);
-    for (i = 0; i < fs->conf.num_request_queues; i++) {
-        fs->req_vqs[i] = virtio_add_queue(vdev, fs->conf.queue_size, vuf_handle_output);
-    }
-
-    /* 1 high prio queue, plus the number configured */
-    fs->vhost_dev.nvqs = 1 + fs->conf.num_request_queues;
-    fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
+    vuf_create_vqs(vdev);
 
     ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user,
                          VHOST_BACKEND_TYPE_USER, 0, errp);
     if (ret < 0) {
@@ -244,13 +273,8 @@ static void vuf_device_realize(DeviceState *dev, Error **errp)
 
 err_virtio:
     vhost_user_cleanup(&fs->vhost_user);
-    virtio_delete_queue(fs->hiprio_vq);
-    for (i = 0; i < fs->conf.num_request_queues; i++) {
-        virtio_delete_queue(fs->req_vqs[i]);
-    }
-    g_free(fs->req_vqs);
+    vuf_cleanup_vqs(vdev);
     virtio_cleanup(vdev);
-    g_free(fs->vhost_dev.vqs);
     return;
 }
 
@@ -258,7 +282,6 @@ static void vuf_device_unrealize(DeviceState *dev)
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
     VHostUserFS *fs = VHOST_USER_FS(dev);
-    int i;
 
     /* This will stop vhost backend if appropriate.
      */
     vuf_set_status(vdev, 0);
@@ -267,14 +290,8 @@ static void vuf_device_unrealize(DeviceState *dev)
 
     vhost_user_cleanup(&fs->vhost_user);
 
-    virtio_delete_queue(fs->hiprio_vq);
-    for (i = 0; i < fs->conf.num_request_queues; i++) {
-        virtio_delete_queue(fs->req_vqs[i]);
-    }
-    g_free(fs->req_vqs);
+    vuf_cleanup_vqs(vdev);
     virtio_cleanup(vdev);
-    g_free(fs->vhost_dev.vqs);
-    fs->vhost_dev.vqs = NULL;
 }
 
 static const VMStateDescription vuf_vmstate = {

From patchwork Thu Sep 30 15:30:31 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528843
From: Vivek Goyal
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Subject: [PATCH 07/13] virtiofsd: Release file locks using F_UNLCK
Date: Thu, 30 Sep 2021 11:30:31 -0400
Message-Id: <20210930153037.1194279-8-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu

We emulate POSIX locks for the guest using open file description (OFD)
locks in virtiofsd. When an fd is closed in the guest, we find the
associated OFD lock fd (if there is one) and close it to release all the
locks. The assumption is that no other thread is using the
lo_inode_plock structure or plock->fd, so it is safe to do so.

But we are about to introduce a blocking variant of locks (SETLKW),
which means a thread might be waiting for a lock to become available
while using plock->fd; that is, there can still be users of the plock
structure. So release the locks using fcntl(F_OFD_SETLK, F_UNLCK)
instead of closing the fd; plock will be freed later, when the lo_inode
is freed.
Signed-off-by: Vivek Goyal
Signed-off-by: Ioannis Angelakopoulos
---
 tools/virtiofsd/passthrough_ll.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
index 38b2af8599..6928662e22 100644
--- a/tools/virtiofsd/passthrough_ll.c
+++ b/tools/virtiofsd/passthrough_ll.c
@@ -1557,9 +1557,6 @@ static void unref_inode(struct lo_data *lo, struct lo_inode *inode, uint64_t n)
         lo_map_remove(&lo->ino_map, inode->fuse_ino);
         g_hash_table_remove(lo->inodes, &inode->key);
         if (lo->posix_lock) {
-            if (g_hash_table_size(inode->posix_locks)) {
-                fuse_log(FUSE_LOG_WARNING, "Hash table is not empty\n");
-            }
             g_hash_table_destroy(inode->posix_locks);
             pthread_mutex_destroy(&inode->plock_mutex);
         }
@@ -2266,6 +2263,8 @@ static void lo_flush(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
     (void)ino;
     struct lo_inode *inode;
     struct lo_data *lo = lo_data(req);
+    struct lo_inode_plock *plock;
+    struct flock flock;
 
     inode = lo_inode(req, ino);
     if (!inode) {
@@ -2282,8 +2281,22 @@ static void lo_flush(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi)
     /* An fd is going away. Cleanup associated posix locks */
     if (lo->posix_lock) {
         pthread_mutex_lock(&inode->plock_mutex);
-        g_hash_table_remove(inode->posix_locks,
+        plock = g_hash_table_lookup(inode->posix_locks,
                             GUINT_TO_POINTER(fi->lock_owner));
+
+        if (plock) {
+            /*
+             * An fd is being closed. For posix locks, this means
+             * drop all the associated locks.
+             */
+            memset(&flock, 0, sizeof(struct flock));
+            flock.l_type = F_UNLCK;
+            flock.l_whence = SEEK_SET;
+            /* Unlock whole file */
+            flock.l_start = flock.l_len = 0;
+            fcntl(plock->fd, F_OFD_SETLK, &flock);
+        }
+
         pthread_mutex_unlock(&inode->plock_mutex);
     }
 
     res = close(dup(lo_fi_fd(req, fi)));

From patchwork Thu Sep 30 15:30:32 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528835
From: Vivek Goyal
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Subject: [PATCH 08/13] virtiofsd: Create a notification queue
Date: Thu, 30 Sep 2021 11:30:32 -0400
Message-Id: <20210930153037.1194279-9-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu

Add a notification queue which will be used to send async notifications
for file lock availability.

Signed-off-by: Vivek Goyal
Signed-off-by: Ioannis Angelakopoulos
---
 hw/virtio/vhost-user-fs-pci.c     |  4 +-
 hw/virtio/vhost-user-fs.c         | 62 +++++++++++++++++++++++++--
 include/hw/virtio/vhost-user-fs.h |  2 +
 tools/virtiofsd/fuse_i.h          |  1 +
 tools/virtiofsd/fuse_virtio.c     | 70 +++++++++++++++++++++++--------
 5 files changed, 116 insertions(+), 23 deletions(-)

diff --git a/hw/virtio/vhost-user-fs-pci.c b/hw/virtio/vhost-user-fs-pci.c
index 2ed8492b3f..cdb9471088 100644
--- a/hw/virtio/vhost-user-fs-pci.c
+++ b/hw/virtio/vhost-user-fs-pci.c
@@ -41,8 +41,8 @@ static void vhost_user_fs_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
     DeviceState *vdev = DEVICE(&dev->vdev);
 
     if (vpci_dev->nvectors == DEV_NVECTORS_UNSPECIFIED) {
-        /* Also reserve config change and hiprio queue vectors */
-        vpci_dev->nvectors = dev->vdev.conf.num_request_queues + 2;
+        /* Also reserve config change, hiprio and notification queue vectors */
+        vpci_dev->nvectors = dev->vdev.conf.num_request_queues + 3;
     }
 
     qdev_realize(vdev, BUS(&vpci_dev->bus), errp);
diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
index d1efbc5b18..6bafcf0243 100644
---
 a/hw/virtio/vhost-user-fs.c
+++ b/hw/virtio/vhost-user-fs.c
@@ -31,6 +31,7 @@ static const int user_feature_bits[] = {
     VIRTIO_F_NOTIFY_ON_EMPTY,
     VIRTIO_F_RING_PACKED,
     VIRTIO_F_IOMMU_PLATFORM,
+    VIRTIO_FS_F_NOTIFICATION,
 
     VHOST_INVALID_FEATURE_BIT
 };
@@ -147,7 +148,7 @@ static void vuf_handle_output(VirtIODevice *vdev, VirtQueue *vq)
      */
 }
 
-static void vuf_create_vqs(VirtIODevice *vdev)
+static void vuf_create_vqs(VirtIODevice *vdev, bool notification_vq)
 {
     VHostUserFS *fs = VHOST_USER_FS(vdev);
     unsigned int i;
@@ -155,6 +156,15 @@ static void vuf_create_vqs(VirtIODevice *vdev)
     /* Hiprio queue */
     fs->hiprio_vq = virtio_add_queue(vdev, fs->conf.queue_size,
                                      vuf_handle_output);
+    /*
+     * Notification queue. Feature negotiation happens later, so at this
+     * point we don't know whether the driver will use the notification
+     * queue or not.
+     */
+    if (notification_vq) {
+        fs->notification_vq = virtio_add_queue(vdev, fs->conf.queue_size,
+                                               vuf_handle_output);
+    }
 
     /* Request queues */
     fs->req_vqs = g_new(VirtQueue *, fs->conf.num_request_queues);
@@ -163,8 +173,12 @@ static void vuf_create_vqs(VirtIODevice *vdev)
                                           vuf_handle_output);
     }
 
-    /* 1 high prio queue, plus the number configured */
-    fs->vhost_dev.nvqs = 1 + fs->conf.num_request_queues;
+    /* 1 high prio queue, 1 notification queue plus the number configured */
+    if (notification_vq) {
+        fs->vhost_dev.nvqs = 2 + fs->conf.num_request_queues;
+    } else {
+        fs->vhost_dev.nvqs = 1 + fs->conf.num_request_queues;
+    }
     fs->vhost_dev.vqs = g_new0(struct vhost_virtqueue, fs->vhost_dev.nvqs);
 }
 
@@ -176,6 +190,11 @@ static void vuf_cleanup_vqs(VirtIODevice *vdev)
     virtio_delete_queue(fs->hiprio_vq);
     fs->hiprio_vq = NULL;
 
+    if (fs->notification_vq) {
+        virtio_delete_queue(fs->notification_vq);
+    }
+    fs->notification_vq = NULL;
+
     for (i = 0; i < fs->conf.num_request_queues; i++) {
         virtio_delete_queue(fs->req_vqs[i]);
     }
@@ -194,9 +213,43 @@ static uint64_t vuf_get_features(VirtIODevice *vdev,
 {
     VHostUserFS *fs = VHOST_USER_FS(vdev);
 
+
virtio_add_feature(&features, VIRTIO_FS_F_NOTIFICATION);
+
     return vhost_get_features(&fs->vhost_dev, user_feature_bits, features);
 }
 
+static void vuf_set_features(VirtIODevice *vdev, uint64_t features)
+{
+    VHostUserFS *fs = VHOST_USER_FS(vdev);
+
+    if (virtio_has_feature(features, VIRTIO_FS_F_NOTIFICATION)) {
+        fs->notify_enabled = true;
+        /*
+         * If the guest first booted with no notification queue support
+         * and later rebooted with a kernel which supports notifications,
+         * we can end up here.
+         */
+        if (!fs->notification_vq) {
+            vuf_cleanup_vqs(vdev);
+            vuf_create_vqs(vdev, true);
+        }
+        return;
+    }
+
+    fs->notify_enabled = false;
+    if (!fs->notification_vq) {
+        return;
+    }
+    /*
+     * Driver does not support notification queue. Reconfigure queues
+     * and do not create notification queue.
+     */
+    vuf_cleanup_vqs(vdev);
+
+    /* Create queues again */
+    vuf_create_vqs(vdev, false);
+}
+
 static void vuf_guest_notifier_mask(VirtIODevice *vdev, int idx,
                                     bool mask)
 {
@@ -262,7 +315,7 @@ static void vuf_device_realize(DeviceState *dev, Error **errp)
     virtio_init(vdev, "vhost-user-fs", VIRTIO_ID_FS,
                 sizeof(struct virtio_fs_config));
 
-    vuf_create_vqs(vdev);
+    vuf_create_vqs(vdev, true);
     ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user,
                          VHOST_BACKEND_TYPE_USER, 0, errp);
     if (ret < 0) {
@@ -327,6 +380,7 @@ static void vuf_class_init(ObjectClass *klass, void *data)
     vdc->realize = vuf_device_realize;
     vdc->unrealize = vuf_device_unrealize;
     vdc->get_features = vuf_get_features;
+    vdc->set_features = vuf_set_features;
     vdc->get_config = vuf_get_config;
     vdc->set_status = vuf_set_status;
     vdc->guest_notifier_mask = vuf_guest_notifier_mask;
diff --git a/include/hw/virtio/vhost-user-fs.h b/include/hw/virtio/vhost-user-fs.h
index 0d62834c25..95dc0dd402 100644
--- a/include/hw/virtio/vhost-user-fs.h
+++ b/include/hw/virtio/vhost-user-fs.h
@@ -39,7 +39,9 @@ struct VHostUserFS {
     VhostUserState vhost_user;
     VirtQueue **req_vqs;
     VirtQueue *hiprio_vq;
+    VirtQueue *notification_vq;
     int32_t bootindex;
+
bool notify_enabled;
 
     /*< public >*/
 };
 
diff --git a/tools/virtiofsd/fuse_i.h b/tools/virtiofsd/fuse_i.h
index 492e002181..4942d080da 100644
--- a/tools/virtiofsd/fuse_i.h
+++ b/tools/virtiofsd/fuse_i.h
@@ -73,6 +73,7 @@ struct fuse_session {
     int vu_socketfd;
     struct fv_VuDev *virtio_dev;
     int thread_pool_size;
+    bool notify_enabled;
 };
 
 struct fuse_chan {
diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index baead08b28..f5b87a508a 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -14,6 +14,7 @@
 #include "qemu/osdep.h"
 #include "qemu/iov.h"
 #include "qapi/error.h"
+#include "standard-headers/linux/virtio_fs.h"
 #include "fuse_i.h"
 #include "standard-headers/linux/fuse.h"
 #include "fuse_misc.h"
@@ -85,12 +86,25 @@ struct fv_VuDev {
 /* Callback from libvhost-user */
 static uint64_t fv_get_features(VuDev *dev)
 {
-    return 1ULL << VIRTIO_F_VERSION_1;
+    uint64_t features;
+
+    features = 1ull << VIRTIO_F_VERSION_1 |
+               1ull << VIRTIO_FS_F_NOTIFICATION;
+
+    return features;
 }
 
 /* Callback from libvhost-user */
 static void fv_set_features(VuDev *dev, uint64_t features)
 {
+    struct fv_VuDev *vud = container_of(dev, struct fv_VuDev, dev);
+    struct fuse_session *se = vud->se;
+
+    if ((1ull << VIRTIO_FS_F_NOTIFICATION) & features) {
+        se->notify_enabled = true;
+    } else {
+        se->notify_enabled = false;
+    }
 }
 
 /*
@@ -719,22 +733,25 @@ static void fv_queue_cleanup_thread(struct fv_VuDev *vud, int qidx)
 {
     int ret;
     struct fv_QueueInfo *ourqi;
+    struct fuse_session *se = vud->se;
 
     assert(qidx < vud->nqueues);
     ourqi = vud->qi[qidx];
 
-    /* Kill the thread */
-    if (eventfd_write(ourqi->kill_fd, 1)) {
-        fuse_log(FUSE_LOG_ERR, "Eventfd_write for queue %d: %s\n",
-                 qidx, strerror(errno));
-    }
-    ret = pthread_join(ourqi->thread, NULL);
-    if (ret) {
-        fuse_log(FUSE_LOG_ERR, "%s: Failed to join thread idx %d err %d\n",
-                 __func__, qidx, ret);
+    /* qidx == 1 is the notification queue if notifications are enabled */
+    if (!se->notify_enabled ||
qidx != 1) {
+        /* Kill the thread */
+        if (eventfd_write(ourqi->kill_fd, 1)) {
+            fuse_log(FUSE_LOG_ERR, "Eventfd_write for queue %d: %m\n", qidx);
+        }
+        ret = pthread_join(ourqi->thread, NULL);
+        if (ret) {
+            fuse_log(FUSE_LOG_ERR, "%s: Failed to join thread idx %d err"
+                     " %d\n", __func__, qidx, ret);
+        }
+        close(ourqi->kill_fd);
     }
     pthread_mutex_destroy(&ourqi->vq_lock);
-    close(ourqi->kill_fd);
     ourqi->kick_fd = -1;
     g_free(vud->qi[qidx]);
     vud->qi[qidx] = NULL;
@@ -757,6 +774,9 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
 {
     struct fv_VuDev *vud = container_of(dev, struct fv_VuDev, dev);
     struct fv_QueueInfo *ourqi;
+    int valid_queues = 2; /* One hiprio queue and one request queue */
+    bool notification_q = false;
+    struct fuse_session *se = vud->se;
 
     fuse_log(FUSE_LOG_INFO, "%s: qidx=%d started=%d\n", __func__, qidx,
              started);
@@ -768,10 +788,19 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
      * well-behaved client in mind and may not protect against all types of
      * races yet.
      */
-    if (qidx > 1) {
-        fuse_log(FUSE_LOG_ERR,
-                 "%s: multiple request queues not yet implemented, please only "
-                 "configure 1 request queue\n",
+    if (se->notify_enabled) {
+        valid_queues++;
+        /*
+         * If the notification queue is enabled, then qidx 1 is the
+         * notification queue.
+         */
+        if (qidx == 1) {
+            notification_q = true;
+        }
+    }
+
+    if (qidx >= valid_queues) {
+        fuse_log(FUSE_LOG_ERR, "%s: multiple request queues not yet "
+                 "implemented, please only configure 1 request queue\n",
                  __func__);
         exit(EXIT_FAILURE);
     }
@@ -793,11 +822,18 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
         assert(vud->qi[qidx]->kick_fd == -1);
     }
     ourqi = vud->qi[qidx];
+    pthread_mutex_init(&ourqi->vq_lock, NULL);
+
+    /*
+     * For the notification queue, we don't have to start a thread yet.
+     */
+    if (notification_q) {
+        return;
+    }
+
     ourqi->kick_fd = dev->vq[qidx].kick_fd;
 
     ourqi->kill_fd = eventfd(0, EFD_CLOEXEC | EFD_SEMAPHORE);
     assert(ourqi->kill_fd != -1);
-    pthread_mutex_init(&ourqi->vq_lock, NULL);
 
     if (pthread_create(&ourqi->thread, NULL, fv_queue_thread, ourqi)) {
         fuse_log(FUSE_LOG_ERR, "%s: Failed to create thread for queue %d\n",
@@ -1048,7 +1084,7 @@ int virtio_session_mount(struct fuse_session *se)
     se->vu_socketfd = data_sock;
     se->virtio_dev->se = se;
     pthread_rwlock_init(&se->virtio_dev->vu_dispatch_rwlock, NULL);
-    if (!vu_init(&se->virtio_dev->dev, 2, se->vu_socketfd, fv_panic, NULL,
+    if (!vu_init(&se->virtio_dev->dev, 3, se->vu_socketfd, fv_panic, NULL,
                  fv_set_watch, fv_remove_watch, &fv_iface)) {
         fuse_log(FUSE_LOG_ERR, "%s: vu_init failed\n", __func__);
         return -1;

From patchwork Thu Sep 30 15:30:33 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528747
From: Vivek Goyal
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Subject: [PATCH 09/13] virtiofsd: Specify size of notification buffer
using config space
Date: Thu, 30 Sep 2021 11:30:33 -0400
Message-Id: <20210930153037.1194279-10-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu

The daemon specifies the size of the notification buffer it needs, and
that should be done through the config space. Only the ->notify_buf_size
value of the config space comes from the daemon; the rest is filled in
by the QEMU device emulation code.
Signed-off-by: Vivek Goyal
Signed-off-by: Ioannis Angelakopoulos
---
 hw/virtio/vhost-user-fs.c                  | 27 +++++++++++++++++++
 include/hw/virtio/vhost-user-fs.h          |  2 ++
 include/standard-headers/linux/virtio_fs.h |  2 ++
 tools/virtiofsd/fuse_virtio.c              | 31 ++++++++++++++++++++++
 4 files changed, 62 insertions(+)

diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
index 6bafcf0243..68a94708b4 100644
--- a/hw/virtio/vhost-user-fs.c
+++ b/hw/virtio/vhost-user-fs.c
@@ -36,15 +36,41 @@ static const int user_feature_bits[] = {
     VHOST_INVALID_FEATURE_BIT
 };
 
+static int vhost_user_fs_handle_config_change(struct vhost_dev *dev)
+{
+    return 0;
+}
+
+const VhostDevConfigOps fs_ops = {
+    .vhost_dev_config_notifier = vhost_user_fs_handle_config_change,
+};
+
 static void vuf_get_config(VirtIODevice *vdev, uint8_t *config)
 {
     VHostUserFS *fs = VHOST_USER_FS(vdev);
     struct virtio_fs_config fscfg = {};
+    Error *local_err = NULL;
+    int ret;
+
+    /*
+     * As of now we only get notification buffer size from device. And that's
+     * needed only if notification queue is enabled.
+ */ + if (fs->notify_enabled) { + ret = vhost_dev_get_config(&fs->vhost_dev, (uint8_t *)&fs->fscfg, + sizeof(struct virtio_fs_config), + &local_err); + if (ret) { + error_report_err(local_err); + return; + } + } memcpy((char *)fscfg.tag, fs->conf.tag, MIN(strlen(fs->conf.tag) + 1, sizeof(fscfg.tag))); virtio_stl_p(vdev, &fscfg.num_request_queues, fs->conf.num_request_queues); + virtio_stl_p(vdev, &fscfg.notify_buf_size, fs->fscfg.notify_buf_size); memcpy(config, &fscfg, sizeof(fscfg)); } @@ -316,6 +342,7 @@ static void vuf_device_realize(DeviceState *dev, Error **errp) sizeof(struct virtio_fs_config)); vuf_create_vqs(vdev, true); + vhost_dev_set_config_notifier(&fs->vhost_dev, &fs_ops); ret = vhost_dev_init(&fs->vhost_dev, &fs->vhost_user, VHOST_BACKEND_TYPE_USER, 0, errp); if (ret < 0) { diff --git a/include/hw/virtio/vhost-user-fs.h b/include/hw/virtio/vhost-user-fs.h index 95dc0dd402..3b114ee260 100644 --- a/include/hw/virtio/vhost-user-fs.h +++ b/include/hw/virtio/vhost-user-fs.h @@ -14,6 +14,7 @@ #ifndef _QEMU_VHOST_USER_FS_H #define _QEMU_VHOST_USER_FS_H +#include "standard-headers/linux/virtio_fs.h" #include "hw/virtio/virtio.h" #include "hw/virtio/vhost.h" #include "hw/virtio/vhost-user.h" @@ -37,6 +38,7 @@ struct VHostUserFS { struct vhost_virtqueue *vhost_vqs; struct vhost_dev vhost_dev; VhostUserState vhost_user; + struct virtio_fs_config fscfg; VirtQueue **req_vqs; VirtQueue *hiprio_vq; VirtQueue *notification_vq; diff --git a/include/standard-headers/linux/virtio_fs.h b/include/standard-headers/linux/virtio_fs.h index b7f015186e..867d18acf6 100644 --- a/include/standard-headers/linux/virtio_fs.h +++ b/include/standard-headers/linux/virtio_fs.h @@ -17,6 +17,8 @@ struct virtio_fs_config { /* Number of request queues */ uint32_t num_request_queues; + /* Size of notification buffer */ + uint32_t notify_buf_size; } QEMU_PACKED; /* For the id field in virtio_pci_shm_cap */ diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c index 
f5b87a508a..3b720c5d4a 100644 --- a/tools/virtiofsd/fuse_virtio.c +++ b/tools/virtiofsd/fuse_virtio.c @@ -856,6 +856,35 @@ static bool fv_queue_order(VuDev *dev, int qidx) return false; } +static uint64_t fv_get_protocol_features(VuDev *dev) +{ + return 1ull << VHOST_USER_PROTOCOL_F_CONFIG; +} + +static int fv_get_config(VuDev *dev, uint8_t *config, uint32_t len) +{ + struct virtio_fs_config fscfg = {}; + unsigned notify_size, roundto = 64; + union fuse_notify_union { + struct fuse_notify_poll_wakeup_out wakeup_out; + struct fuse_notify_inval_inode_out inode_out; + struct fuse_notify_inval_entry_out entry_out; + struct fuse_notify_delete_out delete_out; + struct fuse_notify_store_out store_out; + struct fuse_notify_retrieve_out retrieve_out; + }; + + notify_size = sizeof(struct fuse_out_header) + + sizeof(union fuse_notify_union); + notify_size = ((notify_size + roundto) / roundto) * roundto; + + fscfg.notify_buf_size = notify_size; + memcpy(config, &fscfg, len); + fuse_log(FUSE_LOG_DEBUG, "%s:Setting notify_buf_size=%d\n", __func__, + fscfg.notify_buf_size); + return 0; +} + static const VuDevIface fv_iface = { .get_features = fv_get_features, .set_features = fv_set_features, @@ -864,6 +893,8 @@ static const VuDevIface fv_iface = { .queue_set_started = fv_queue_set_started, .queue_is_processed_in_order = fv_queue_order, + .get_protocol_features = fv_get_protocol_features, + .get_config = fv_get_config, }; /* From patchwork Thu Sep 30 15:30:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vivek Goyal X-Patchwork-Id: 12528803 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80EE5C433F5 for ; Thu, 30 Sep 2021 16:00:56 +0000 (UTC) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher 
From: Vivek Goyal
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Subject: [PATCH 10/13] virtiofsd: Custom threadpool for remote blocking posix
 locks requests
Date: Thu, 30 Sep 2021 11:30:34 -0400
Message-Id: <20210930153037.1194279-11-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>
MIME-Version: 1.0
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com,
 vgoyal@redhat.com, miklos@szeredi.hu

Add a new custom threadpool, using POSIX threads, that specifically services
locking requests.
In the case of an fcntl(SETLKW) request, if the guest is waiting for a lock
(or locks) and issues a hard reboot through SysRq, then virtiofsd unblocks
the blocked threads by sending them a signal and waking them up.

The current threadpool (GThreadPool) is not adequate for servicing locking
requests that cause a thread to block, because GLib does not provide an API
to cancel a request while it is being serviced by a thread. In addition, a
user might be running virtiofsd without a threadpool
(--thread-pool-size=0); in that case a locking request that blocks would
prevent the main virtqueue thread from servicing any other requests.

The only exception occurs when the lock is of type F_UNLCK. In this case the
request is serviced by the main virtqueue thread or a GThreadPool thread, to
avoid a deadlock when all the threads in the custom threadpool are blocked.

virtiofsd then cleans up the state of the threads, releases them back to the
system, and re-initializes the pool.

Signed-off-by: Ioannis Angelakopoulos
Signed-off-by: Vivek Goyal
---
 tools/virtiofsd/fuse_virtio.c         |  90 ++++++-
 tools/virtiofsd/meson.build           |   1 +
 tools/virtiofsd/passthrough_seccomp.c |   1 +
 tools/virtiofsd/tpool.c               | 331 ++++++++++++++++++++++++++
 tools/virtiofsd/tpool.h               |  18 ++
 5 files changed, 440 insertions(+), 1 deletion(-)
 create mode 100644 tools/virtiofsd/tpool.c
 create mode 100644 tools/virtiofsd/tpool.h

diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index 3b720c5d4a..c67c2e0e7a 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -20,6 +20,7 @@
 #include "fuse_misc.h"
 #include "fuse_opt.h"
 #include "fuse_virtio.h"
+#include "tpool.h"
 
 #include
 #include
@@ -612,6 +613,60 @@ out:
     free(req);
 }
 
+/*
+ * If the request is a locking request, use a custom locking thread pool.
+ */
+static bool use_lock_tpool(gpointer data, gpointer user_data)
+{
+    struct fv_QueueInfo *qi = user_data;
+    struct fuse_session *se = qi->virtio_dev->se;
+    FVRequest *req = data;
+    VuVirtqElement *elem = &req->elem;
+    struct fuse_buf fbuf = {};
+    struct fuse_in_header *inhp;
+    struct fuse_lk_in *lkinp;
+    size_t lk_req_len;
+    /* The 'out' part of the elem is from qemu */
+    unsigned int out_num = elem->out_num;
+    struct iovec *out_sg = elem->out_sg;
+    size_t out_len = iov_size(out_sg, out_num);
+    bool use_custom_tpool = false;
+
+    /*
+     * If notifications are not enabled, there is no point in using the
+     * custom lock thread pool.
+     */
+    if (!se->notify_enabled) {
+        return false;
+    }
+
+    assert(se->bufsize > sizeof(struct fuse_in_header));
+    lk_req_len = sizeof(struct fuse_in_header) + sizeof(struct fuse_lk_in);
+
+    if (out_len < lk_req_len) {
+        return false;
+    }
+
+    fbuf.mem = g_malloc(se->bufsize);
+    copy_from_iov(&fbuf, out_num, out_sg, lk_req_len);
+
+    inhp = fbuf.mem;
+    if (inhp->opcode != FUSE_SETLKW) {
+        goto out;
+    }
+
+    lkinp = fbuf.mem + sizeof(struct fuse_in_header);
+    if (lkinp->lk.type == F_UNLCK) {
+        goto out;
+    }
+
+    /* It's a blocking lock request. Use the custom thread pool. */
+    use_custom_tpool = true;
+out:
+    g_free(fbuf.mem);
+    return use_custom_tpool;
+}
+
 /* Thread function for individual queues, created when a queue is 'started' */
 static void *fv_queue_thread(void *opaque)
 {
@@ -619,6 +674,7 @@ static void *fv_queue_thread(void *opaque)
     struct VuDev *dev = &qi->virtio_dev->dev;
     struct VuVirtq *q = vu_get_queue(dev, qi->qidx);
     struct fuse_session *se = qi->virtio_dev->se;
+    struct fv_ThreadPool *lk_tpool = NULL;
     GThreadPool *pool = NULL;
     GList *req_list = NULL;
 
@@ -631,6 +687,24 @@ static void *fv_queue_thread(void *opaque)
         fuse_log(FUSE_LOG_ERR, "%s: g_thread_pool_new failed\n", __func__);
         return NULL;
     }
+
+    }
+
+    /*
+     * Create the custom thread pool to handle blocking locking requests.
+     * Do not create it for the hiprio queue (qidx=0).
+     */
+    if (qi->qidx) {
+        fuse_log(FUSE_LOG_DEBUG, "%s: Creating a locking thread pool for"
+                 " Queue %d with size %d\n", __func__, qi->qidx, 4);
+        lk_tpool = fv_thread_pool_init(4);
+        if (!lk_tpool) {
+            fuse_log(FUSE_LOG_ERR, "%s: fv_thread_pool failed\n", __func__);
+            if (pool) {
+                g_thread_pool_free(pool, FALSE, TRUE);
+            }
+            return NULL;
+        }
     }
 
     fuse_log(FUSE_LOG_INFO, "%s: Start for queue %d kick_fd %d\n", __func__,
@@ -703,7 +777,17 @@ static void *fv_queue_thread(void *opaque)
             req->reply_sent = false;
 
-            if (!se->thread_pool_size) {
+            /*
+             * In every case we get the opcode of the request and check if
+             * it is a locking request. If yes, we assign the request to the
+             * custom thread pool, except when the lock is of type F_UNLCK.
+             * In that case, to avoid a deadlock when all the custom threads
+             * are blocked, the request is serviced by the main virtqueue
+             * thread or a thread in the GThreadPool.
+             */
+            if (use_lock_tpool(req, qi)) {
+                fv_thread_pool_push(lk_tpool, fv_queue_worker, req, qi);
+            } else if (!se->thread_pool_size) {
                 req_list = g_list_prepend(req_list, req);
             } else {
                 g_thread_pool_push(pool, req, NULL);
@@ -726,6 +810,10 @@ static void *fv_queue_thread(void *opaque)
         g_thread_pool_free(pool, FALSE, TRUE);
     }
 
+    if (lk_tpool) {
+        fv_thread_pool_destroy(lk_tpool);
+    }
+
     return NULL;
 }
diff --git a/tools/virtiofsd/meson.build b/tools/virtiofsd/meson.build
index c134ba633f..203cd5613a 100644
--- a/tools/virtiofsd/meson.build
+++ b/tools/virtiofsd/meson.build
@@ -6,6 +6,7 @@ executable('virtiofsd', files(
   'fuse_signals.c',
   'fuse_virtio.c',
   'helper.c',
+  'tpool.c',
   'passthrough_ll.c',
   'passthrough_seccomp.c'),
   dependencies: [seccomp, qemuutil, libcap_ng, vhost_user],
diff --git a/tools/virtiofsd/passthrough_seccomp.c b/tools/virtiofsd/passthrough_seccomp.c
index a3ce9f898d..cd24b40b78 100644
--- a/tools/virtiofsd/passthrough_seccomp.c
+++ b/tools/virtiofsd/passthrough_seccomp.c
@@ -116,6 +116,7 @@ static const int syscall_allowlist[] = {
     SCMP_SYS(write),
     SCMP_SYS(writev),
     SCMP_SYS(umask),
+    SCMP_SYS(nanosleep),
 };
 
 /* Syscalls used when --syslog is enabled */
diff --git a/tools/virtiofsd/tpool.c b/tools/virtiofsd/tpool.c
new file mode 100644
index 0000000000..f9aa41b0c5
--- /dev/null
+++ b/tools/virtiofsd/tpool.c
@@ -0,0 +1,331 @@
+/*
+ * custom threadpool for virtiofsd
+ *
+ * Copyright (C) 2021 Red Hat, Inc.
+ *
+ * Authors:
+ *   Ioannis Angelakopoulos
+ *   Vivek Goyal
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include
+#include
+#include
+#include
+#include "tpool.h"
+#include "fuse_log.h"
+
+struct fv_PoolReq {
+    struct fv_PoolReq *next;                     /* pointer to next task */
+    void (*worker_func)(void *arg1, void *arg2); /* worker function */
+    void *arg1;                                  /* 1st arg: Request */
+    void *arg2;                                  /* 2nd arg: Virtqueue */
+};
+
+struct fv_PoolReqQueue {
+    pthread_mutex_t lock;
+    GQueue queue;
+    pthread_cond_t notify;   /* Condition variable */
+};
+
+struct fv_PoolThread {
+    pthread_t pthread;
+    int alive;
+    int id;
+    struct fv_ThreadPool *tpool;
+};
+
+struct fv_ThreadPool {
+    struct fv_PoolThread **threads;
+    struct fv_PoolReqQueue *req_queue;
+    pthread_mutex_t tp_lock;
+
+    /* Total number of threads created */
+    int num_threads;
+
+    /* Number of threads running now */
+    int nr_running;
+    int destroy_pool;
+};
+
+/* Initialize the locking request queue */
+static struct fv_PoolReqQueue *fv_pool_request_queue_init(void)
+{
+    struct fv_PoolReqQueue *rq;
+
+    rq = g_new0(struct fv_PoolReqQueue, 1);
+    pthread_mutex_init(&(rq->lock), NULL);
+    pthread_cond_init(&(rq->notify), NULL);
+    g_queue_init(&rq->queue);
+    return rq;
+}
+
+/* Push a new locking request to the queue */
+void fv_thread_pool_push(struct fv_ThreadPool *tpool,
+                         void (*worker_func)(void *, void *),
+                         void *arg1, void *arg2)
+{
+    struct fv_PoolReq *newreq;
+    struct fv_PoolReqQueue *rq = tpool->req_queue;
+
+    newreq = g_new(struct fv_PoolReq, 1);
+    newreq->worker_func = worker_func;
+    newreq->arg1 = arg1;
+    newreq->arg2 = arg2;
+    newreq->next = NULL;
+
+    /* Now add the request to the queue */
+    pthread_mutex_lock(&rq->lock);
+    g_queue_push_tail(&rq->queue, newreq);
+
+    /* Notify the threads that a request is available */
+    pthread_cond_signal(&rq->notify);
+    pthread_mutex_unlock(&rq->lock);
+}
+
+/* Pop a locking request from the queue */
+static struct fv_PoolReq *fv_tpool_pop(struct fv_ThreadPool *tpool)
+{
+    struct fv_PoolReq *pool_req = NULL;
+    struct fv_PoolReqQueue *rq = tpool->req_queue;
+
+    pthread_mutex_lock(&rq->lock);
+
+    pool_req = g_queue_pop_head(&rq->queue);
+
+    if (!g_queue_is_empty(&rq->queue)) {
+        pthread_cond_signal(&rq->notify);
+    }
+    pthread_mutex_unlock(&rq->lock);
+
+    return pool_req;
+}
+
+static void fv_pool_request_queue_destroy(struct fv_ThreadPool *tpool)
+{
+    struct fv_PoolReq *pool_req;
+
+    while ((pool_req = fv_tpool_pop(tpool))) {
+        g_free(pool_req);
+    }
+
+    /* Now free the actual queue itself */
+    g_free(tpool->req_queue);
+}
+
+/*
+ * Signal handler for blocking threads that wait on a remote lock to be
+ * released. Called when virtiofsd does cleanup and wants to wake up these
+ * threads.
+ */
+static void fv_thread_signal_handler(int signal)
+{
+    fuse_log(FUSE_LOG_DEBUG, "Thread received a signal.\n");
+    return;
+}
+
+static bool is_pool_stopping(struct fv_ThreadPool *tpool)
+{
+    bool destroy = false;
+
+    pthread_mutex_lock(&tpool->tp_lock);
+    destroy = tpool->destroy_pool;
+    pthread_mutex_unlock(&tpool->tp_lock);
+
+    return destroy;
+}
+
+static void *fv_thread_do_work(void *thread)
+{
+    struct fv_PoolThread *worker = (struct fv_PoolThread *)thread;
+    struct fv_ThreadPool *tpool = worker->tpool;
+    struct fv_PoolReq *pool_request;
+    /* Actual worker function and arguments. Same as non-locking requests */
+    void (*worker_func)(void *, void *);
+    void *arg1;
+    void *arg2;
+
+    while (1) {
+        if (is_pool_stopping(tpool)) {
+            break;
+        }
+
+        /*
+         * Get the queue lock first so that we can wait on the condition
+         * variable afterwards
+         */
+        pthread_mutex_lock(&tpool->req_queue->lock);
+
+        /* Wait on the condition variable until a request is available */
+        while (g_queue_is_empty(&tpool->req_queue->queue) &&
+               !is_pool_stopping(tpool)) {
+            pthread_cond_wait(&tpool->req_queue->notify,
+                              &tpool->req_queue->lock);
+        }
+
+        /* Unlock the queue for other threads */
+        pthread_mutex_unlock(&tpool->req_queue->lock);
+
+        if (is_pool_stopping(tpool)) {
+            break;
+        }
+
+        /* Now the request must be serviced */
+        pool_request = fv_tpool_pop(tpool);
+        if (pool_request) {
+            fuse_log(FUSE_LOG_DEBUG, "%s: Locking Thread:%d handling"
+                     " a request\n", __func__, worker->id);
+            worker_func = pool_request->worker_func;
+            arg1 = pool_request->arg1;
+            arg2 = pool_request->arg2;
+            worker_func(arg1, arg2);
+            g_free(pool_request);
+        }
+    }
+
+    /* Mark the thread as inactive */
+    pthread_mutex_lock(&tpool->tp_lock);
+    tpool->threads[worker->id]->alive = 0;
+    tpool->nr_running--;
+    pthread_mutex_unlock(&tpool->tp_lock);
+
+    return NULL;
+}
+
+/* Create a single thread that handles locking requests */
+static int fv_worker_thread_init(struct fv_ThreadPool *tpool,
+                                 struct fv_PoolThread **thread, int id)
+{
+    struct fv_PoolThread *worker;
+    int ret;
+
+    worker = g_new(struct fv_PoolThread, 1);
+    worker->tpool = tpool;
+    worker->id = id;
+    worker->alive = 1;
+
+    ret = pthread_create(&worker->pthread, NULL, fv_thread_do_work,
+                         worker);
+    if (ret) {
+        fuse_log(FUSE_LOG_ERR, "pthread_create() failed with err=%d\n", ret);
+        g_free(worker);
+        return ret;
+    }
+    pthread_detach(worker->pthread);
+    *thread = worker;
+    return 0;
+}
+
+static void send_signal_all(struct fv_ThreadPool *tpool)
+{
+    int i;
+
+    pthread_mutex_lock(&tpool->tp_lock);
+    for (i = 0; i < tpool->num_threads; i++) {
+        if (tpool->threads[i]->alive) {
+            pthread_kill(tpool->threads[i]->pthread, SIGUSR1);
+        }
+    }
+    pthread_mutex_unlock(&tpool->tp_lock);
+}
+
+static void do_pool_destroy(struct fv_ThreadPool *tpool, bool send_signal)
+{
+    int i, nr_running;
+
+    /* We want to destroy the pool */
+    pthread_mutex_lock(&tpool->tp_lock);
+    tpool->destroy_pool = 1;
+    pthread_mutex_unlock(&tpool->tp_lock);
+
+    /* Wake up threads waiting for requests */
+    pthread_mutex_lock(&tpool->req_queue->lock);
+    pthread_cond_broadcast(&tpool->req_queue->notify);
+    pthread_mutex_unlock(&tpool->req_queue->lock);
+
+    /* Send signal and wait for all threads to exit. */
+    while (1) {
+        if (send_signal) {
+            send_signal_all(tpool);
+        }
+        pthread_mutex_lock(&tpool->tp_lock);
+        nr_running = tpool->nr_running;
+        pthread_mutex_unlock(&tpool->tp_lock);
+        if (!nr_running) {
+            break;
+        }
+        g_usleep(10000);
+    }
+
+    /* Destroy the locking request queue */
+    fv_pool_request_queue_destroy(tpool);
+    for (i = 0; i < tpool->num_threads; i++) {
+        g_free(tpool->threads[i]);
+    }
+
+    /* Now free the threadpool */
+    g_free(tpool->threads);
+    g_free(tpool);
+}
+
+void fv_thread_pool_destroy(struct fv_ThreadPool *tpool)
+{
+    if (!tpool) {
+        return;
+    }
+    do_pool_destroy(tpool, true);
+}
+
+static int register_sig_handler(void)
+{
+    struct sigaction sa;
+
+    sigemptyset(&sa.sa_mask);
+    sa.sa_flags = 0;
+    sa.sa_handler = fv_thread_signal_handler;
+    if (sigaction(SIGUSR1, &sa, NULL) == -1) {
+        fuse_log(FUSE_LOG_ERR, "Cannot register the signal handler:%s\n",
+                 strerror(errno));
+        return 1;
+    }
+    return 0;
+}
+
+/* Initialize the thread pool for the locking posix threads */
+struct fv_ThreadPool *fv_thread_pool_init(unsigned int thread_num)
+{
+    struct fv_ThreadPool *tpool = NULL;
+    int i, ret;
+
+    if (!thread_num) {
+        thread_num = 1;
+    }
+
+    if (register_sig_handler()) {
+        return NULL;
+    }
+    tpool = g_new0(struct fv_ThreadPool, 1);
+    pthread_mutex_init(&(tpool->tp_lock), NULL);
+
+    /* Initialize the lock request queue */
+    tpool->req_queue = fv_pool_request_queue_init();
+
+    /* Create the threads in the pool */
+    tpool->threads = g_new(struct fv_PoolThread *, thread_num);
+
+    for (i = 0; i < thread_num; i++) {
+        ret = fv_worker_thread_init(tpool, &tpool->threads[i], i);
+        if (ret) {
+            goto out_err;
+        }
+        tpool->num_threads++;
+        tpool->nr_running++;
+    }
+
+    return tpool;
+out_err:
+    /* An error occurred. Cleanup and return NULL */
+    do_pool_destroy(tpool, false);
+    return NULL;
+}
diff --git a/tools/virtiofsd/tpool.h b/tools/virtiofsd/tpool.h
new file mode 100644
index 0000000000..48d67e9a50
--- /dev/null
+++ b/tools/virtiofsd/tpool.h
@@ -0,0 +1,18 @@
+/*
+ * custom threadpool for virtiofsd
+ *
+ * Copyright (C) 2021 Red Hat, Inc.
+ *
+ * Authors:
+ *   Ioannis Angelakopoulos
+ *   Vivek Goyal
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+struct fv_ThreadPool;
+
+struct fv_ThreadPool *fv_thread_pool_init(unsigned int thread_num);
+void fv_thread_pool_destroy(struct fv_ThreadPool *tpool);
+void fv_thread_pool_push(struct fv_ThreadPool *tpool,
+                         void (*worker_func)(void *, void *),
+                         void *arg1, void *arg2);

From patchwork Thu Sep 30 15:30:35 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528785
From: Vivek Goyal
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Subject: [PATCH 11/13] virtiofsd: Shutdown notification queue in the end
Date: Thu, 30 Sep 2021 11:30:35 -0400
Message-Id: <20210930153037.1194279-12-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>
MIME-Version: 1.0
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com,
 vgoyal@redhat.com, miklos@szeredi.hu

So far we did not have the notion of cross-queue traffic: we get a request
on a queue and send the response back on the same queue. So if a request is
being processed and a stop-queue request comes in at the same time, we wait
for all pending requests to finish, and then the queue is stopped and the
associated data structures are cleaned up.

But with the notification queue it is now possible that we get a locking
request on a request queue and send the notification back on a different
queue (the notification queue). This means we need to make sure that the
notification queue has not already been shut down, and is not being shut
down in parallel, while we are trying to send a notification back.
Otherwise bad things are bound to happen.

One way to solve this problem is to stop the notification queue last: first
stop hiprio and all request queues. Then, by the time we try to stop the
notification queue, we know no other request can be in progress that could
try to send something on it.

The problem is that currently we don't have any control over the order in
which queues are stopped. If there were a notion of the whole device being
stopped, we could decide in what order queues should be stopped. Stefan
mentioned that there is a command to stop the whole device,
VHOST_USER_SET_STATUS, but it is not implemented in libvhost-user yet. Also,
we probably could not move away from the per-queue stop logic we have as of
now.

As an alternative, he said that if we stop all queues when qidx 0 is being
stopped, it should be fine, and we can solve the issue of the notification
queue shutdown order.

So in this patch I am shutting down all queues when queue 0 is being shut
down, and I have also changed the shutdown order so that the notification
queue is shut down last.
Suggested-by: Stefan Hajnoczi
Signed-off-by: Vivek Goyal
---
 tools/virtiofsd/fuse_virtio.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index c67c2e0e7a..a87e88e286 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -826,6 +826,11 @@ static void fv_queue_cleanup_thread(struct fv_VuDev *vud, int qidx)
     assert(qidx < vud->nqueues);
     ourqi = vud->qi[qidx];
 
+    /* Queue is already stopped */
+    if (!ourqi) {
+        return;
+    }
+
     /* qidx == 1 is the notification queue if notifications are enabled */
     if (!se->notify_enabled || qidx != 1) {
         /* Kill the thread */
@@ -847,14 +852,25 @@ static void fv_queue_cleanup_thread(struct fv_VuDev *vud, int qidx)
 
 static void stop_all_queues(struct fv_VuDev *vud)
 {
+    struct fuse_session *se = vud->se;
+
     for (int i = 0; i < vud->nqueues; i++) {
         if (!vud->qi[i]) {
             continue;
         }
+        /* Shutdown notification queue in the end */
+        if (se->notify_enabled && i == 1) {
+            continue;
+        }
         fuse_log(FUSE_LOG_INFO, "%s: Stopping queue %d thread\n", __func__, i);
         fv_queue_cleanup_thread(vud, i);
     }
+
+    if (se->notify_enabled) {
+        fuse_log(FUSE_LOG_INFO, "%s: Stopping queue %d thread\n", __func__, 1);
+        fv_queue_cleanup_thread(vud, 1);
+    }
 }
 
 /* Callback from libvhost-user on start or stop of a queue */
@@ -934,7 +950,16 @@ static void fv_queue_set_started(VuDev *dev, int qidx, bool started)
          * the queue thread doesn't block in virtio_send_msg().
          */
         vu_dispatch_unlock(vud);
-        fv_queue_cleanup_thread(vud, qidx);
+
+        /*
+         * If queue 0 is being shut down, treat it as if the whole device
+         * is being shut down and stop all queues.
+         */
+        if (qidx == 0) {
+            stop_all_queues(vud);
+        } else {
+            fv_queue_cleanup_thread(vud, qidx);
+        }
         vu_dispatch_wrlock(vud);
     }
 }

From patchwork Thu Sep 30 15:30:36 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528751
to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=aJtNFeqiQ1kOmOZHX97FpByGq70o7cDl9sNER1MSarc=; b=dDwSBFBK+PQJl+trWoImaTgg1tYTOh2LTzv7Etif0NnDS+XVIPYSsrEyEivKQWxyJfFhZu gO4U4WqKHlaUDXCyNTreHLe/K4PBbzYCrARMT5bVPPNWIIs6cR1SK/83lSS1u5SK9RJo4D qnsAa20YaeYr5+hzrxRfjKN6ux8K5oM= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-2-NCmXo8AFP1iSKDavSAec3w-1; Thu, 30 Sep 2021 11:31:39 -0400 X-MC-Unique: NCmXo8AFP1iSKDavSAec3w-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7882110168C8; Thu, 30 Sep 2021 15:31:38 +0000 (UTC) Received: from horse.redhat.com (unknown [10.22.16.146]) by smtp.corp.redhat.com (Postfix) with ESMTP id 08E9660CC9; Thu, 30 Sep 2021 15:30:57 +0000 (UTC) Received: by horse.redhat.com (Postfix, from userid 10451) id 1E49822828B; Thu, 30 Sep 2021 11:30:48 -0400 (EDT) From: Vivek Goyal To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com Subject: [PATCH 12/13] virtiofsd: Implement blocking posix locks Date: Thu, 30 Sep 2021 11:30:36 -0400 Message-Id: <20210930153037.1194279-13-vgoyal@redhat.com> In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com> References: <20210930153037.1194279-1-vgoyal@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=vgoyal@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Received-SPF: pass client-ip=170.10.151.124; envelope-from=vgoyal@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, 
DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" As of now we don't support fcntl(F_SETLKW) and if we see one, we return -EOPNOTSUPP. Change that by accepting these requests and returning a reply immediately asking caller to wait. Once lock is available, send a notification to the waiter indicating lock is available. In response to lock request, we are returning error value as "1", which signals to client to queue the lock request internally and later client will get a notification which will signal lock is taken (or error). And then fuse client should wake up the guest process. 
Signed-off-by: Vivek Goyal
Signed-off-by: Ioannis Angelakopoulos
---
 tools/virtiofsd/fuse_lowlevel.c  | 37 ++++++++++++++++-
 tools/virtiofsd/fuse_lowlevel.h  | 26 ++++++++++++
 tools/virtiofsd/fuse_virtio.c    | 50 ++++++++++++++++++++---
 tools/virtiofsd/passthrough_ll.c | 70 ++++++++++++++++++++++++++++----
 4 files changed, 167 insertions(+), 16 deletions(-)

diff --git a/tools/virtiofsd/fuse_lowlevel.c b/tools/virtiofsd/fuse_lowlevel.c
index e4679c73ab..2e7f4b786d 100644
--- a/tools/virtiofsd/fuse_lowlevel.c
+++ b/tools/virtiofsd/fuse_lowlevel.c
@@ -179,8 +179,8 @@ int fuse_send_reply_iov_nofree(fuse_req_t req, int error, struct iovec *iov,
         .unique = req->unique,
         .error = error,
     };
-
-    if (error <= -1000 || error > 0) {
+    /* error = 1 is used to signal the client to wait for a notification */
+    if (error <= -1000 || error > 1) {
         fuse_log(FUSE_LOG_ERR, "fuse: bad error value: %i\n", error);
         out.error = -ERANGE;
     }
@@ -290,6 +290,11 @@ int fuse_reply_err(fuse_req_t req, int err)
     return send_reply(req, -err, NULL, 0);
 }

+int fuse_reply_wait(fuse_req_t req)
+{
+    return send_reply(req, 1, NULL, 0);
+}
+
 void fuse_reply_none(fuse_req_t req)
 {
     fuse_free_req(req);
@@ -2165,6 +2170,34 @@ static void do_destroy(fuse_req_t req, fuse_ino_t nodeid,
     send_reply_ok(req, NULL, 0);
 }

+static int send_notify_iov(struct fuse_session *se, int notify_code,
+                           struct iovec *iov, int count)
+{
+    struct fuse_out_header out;
+    if (!se->got_init) {
+        return -ENOTCONN;
+    }
+    out.unique = 0;
+    out.error = notify_code;
+    iov[0].iov_base = &out;
+    iov[0].iov_len = sizeof(struct fuse_out_header);
+    return fuse_send_msg(se, NULL, iov, count);
+}
+
+int fuse_lowlevel_notify_lock(struct fuse_session *se, uint64_t unique,
+                              int32_t error)
+{
+    struct fuse_notify_lock_out outarg = {0};
+    struct iovec iov[2];
+
+    outarg.unique = unique;
+    outarg.error = -error;
+
+    iov[1].iov_base = &outarg;
+    iov[1].iov_len = sizeof(outarg);
+    return send_notify_iov(se, FUSE_NOTIFY_LOCK, iov, 2);
+}
+
 int fuse_lowlevel_notify_store(struct fuse_session *se, fuse_ino_t ino,
                               off_t offset, struct fuse_bufvec *bufv)
 {
diff --git a/tools/virtiofsd/fuse_lowlevel.h b/tools/virtiofsd/fuse_lowlevel.h
index c55c0ca2fc..64624b48dc 100644
--- a/tools/virtiofsd/fuse_lowlevel.h
+++ b/tools/virtiofsd/fuse_lowlevel.h
@@ -1251,6 +1251,22 @@ struct fuse_lowlevel_ops {
  */
 int fuse_reply_err(fuse_req_t req, int err);

+/**
+ * Ask the caller to wait for the lock.
+ *
+ * Possible requests:
+ *   setlkw
+ *
+ * If the caller sends a blocking lock request (setlkw), reply that it
+ * should wait for the lock to become available. Once the lock is
+ * available, the caller will receive a notification carrying the
+ * request's unique id and indicating whether the lock was successfully
+ * obtained or not.
+ *
+ * @param req request handle
+ * @return zero for success, -errno for failure to send reply
+ */
+int fuse_reply_wait(fuse_req_t req);
+
 /**
  * Don't send reply
  *
@@ -1685,6 +1701,16 @@ int fuse_lowlevel_notify_delete(struct fuse_session *se, fuse_ino_t parent,
 int fuse_lowlevel_notify_store(struct fuse_session *se, fuse_ino_t ino,
                                off_t offset, struct fuse_bufvec *bufv);

+/**
+ * Notify event related to a previous lock request
+ *
+ * @param se the session object
+ * @param unique the unique id of the request which requested setlkw
+ * @param error zero for success, -errno for the failure
+ */
+int fuse_lowlevel_notify_lock(struct fuse_session *se, uint64_t unique,
+                              int32_t error);
+
 /*
  * Utility functions
  */
diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
index a87e88e286..bb2d4456fc 100644
--- a/tools/virtiofsd/fuse_virtio.c
+++ b/tools/virtiofsd/fuse_virtio.c
@@ -273,6 +273,23 @@ static void vq_send_element(struct fv_QueueInfo *qi, VuVirtqElement *elem,
     vu_dispatch_unlock(qi->virtio_dev);
 }

+/* Returns NULL if queue is empty */
+static FVRequest *vq_pop_notify_elem(struct fv_QueueInfo *qi)
+{
+    struct fuse_session *se = qi->virtio_dev->se;
+    VuDev *dev = &se->virtio_dev->dev;
+    VuVirtq *q = vu_get_queue(dev, qi->qidx);
+    FVRequest *req;
+
+    vu_dispatch_rdlock(qi->virtio_dev);
+    pthread_mutex_lock(&qi->vq_lock);
+    /* Pop an element from the queue */
+    req = vu_queue_pop(dev, q, sizeof(FVRequest));
+    pthread_mutex_unlock(&qi->vq_lock);
+    vu_dispatch_unlock(qi->virtio_dev);
+    return req;
+}
+
 /*
  * Called back by ll whenever it wants to send a reply/message back
  * The 1st element of the iov starts with the fuse_out_header
@@ -281,9 +298,9 @@ static void vq_send_element(struct fv_QueueInfo *qi, VuVirtqElement *elem,
 int virtio_send_msg(struct fuse_session *se, struct fuse_chan *ch,
                     struct iovec *iov, int count)
 {
-    FVRequest *req = container_of(ch, FVRequest, ch);
-    struct fv_QueueInfo *qi = ch->qi;
-    VuVirtqElement *elem = &req->elem;
+    FVRequest *req;
+    struct fv_QueueInfo *qi;
+    VuVirtqElement *elem;
     int ret = 0;

     assert(count >= 1);
@@ -294,8 +311,30 @@ int virtio_send_msg(struct fuse_session *se, struct fuse_chan *ch,

     size_t tosend_len = iov_size(iov, count);

-    /* unique == 0 is notification, which we don't support */
-    assert(out->unique);
+    /* unique == 0 is notification */
+    if (!out->unique) {
+        if (!se->notify_enabled) {
+            return -EOPNOTSUPP;
+        }
+        /* If notifications are enabled, queue index 1 is the notification queue */
+        qi = se->virtio_dev->qi[1];
+        req = vq_pop_notify_elem(qi);
+        if (!req) {
+            /*
+             * TODO: Implement some sort of ring buffer, queue notifications
+             * on it and send them later, when the notification queue has
+             * space available.
+             */
+            return -ENOSPC;
+        }
+        req->reply_sent = false;
+    } else {
+        assert(ch);
+        req = container_of(ch, FVRequest, ch);
+        qi = ch->qi;
+    }
+
+    elem = &req->elem;
     assert(!req->reply_sent);

     /* The 'in' part of the elem is to qemu */
@@ -985,6 +1024,7 @@ static int fv_get_config(VuDev *dev, uint8_t *config, uint32_t len)
         struct fuse_notify_delete_out delete_out;
         struct fuse_notify_store_out store_out;
         struct fuse_notify_retrieve_out retrieve_out;
+        struct fuse_notify_lock_out lock_out;
     };

     notify_size = sizeof(struct fuse_out_header) +
diff --git a/tools/virtiofsd/passthrough_ll.c b/tools/virtiofsd/passthrough_ll.c
index 6928662e22..277f74762b 100644
--- a/tools/virtiofsd/passthrough_ll.c
+++ b/tools/virtiofsd/passthrough_ll.c
@@ -2131,13 +2131,35 @@ out:
     }
 }

+static void setlk_send_notification(struct fuse_session *se, uint64_t unique,
+                                    int saverr)
+{
+    int ret;
+
+    do {
+        ret = fuse_lowlevel_notify_lock(se, unique, saverr);
+        /*
+         * Retry sending the notification if the notification queue does
+         * not have a free descriptor yet; otherwise break out of the loop:
+         * either we successfully sent the notification or some other
+         * error occurred.
+         */
+        if (ret != -ENOSPC) {
+            break;
+        }
+        usleep(10000);
+    } while (1);
+}
+
 static void lo_setlk(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi,
                      struct flock *lock, int sleep)
 {
     struct lo_data *lo = lo_data(req);
     struct lo_inode *inode;
     struct lo_inode_plock *plock;
-    int ret, saverr = 0;
+    int ret, saverr = 0, ofd;
+    uint64_t unique;
+    struct fuse_session *se = req->se;
+    bool blocking_lock = false;

     fuse_log(FUSE_LOG_DEBUG,
              "lo_setlk(ino=%" PRIu64 ", flags=%d)"
@@ -2151,11 +2173,6 @@ static void lo_setlk(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi,
         return;
     }

-    if (sleep) {
-        fuse_reply_err(req, EOPNOTSUPP);
-        return;
-    }
-
     inode = lo_inode(req, ino);
     if (!inode) {
         fuse_reply_err(req, EBADF);
@@ -2168,21 +2185,56 @@ static void lo_setlk(fuse_req_t req, fuse_ino_t ino, struct fuse_file_info *fi,

     if (!plock) {
         saverr = ret;
+        pthread_mutex_unlock(&inode->plock_mutex);
         goto out;
     }

+    /*
+     * plock is now released when the inode goes away. We already hold a
+     * reference on the inode, so plock->fd is guaranteed to stay around
+     * even after dropping inode->plock_mutex.
+     */
+    ofd = plock->fd;
+    pthread_mutex_unlock(&inode->plock_mutex);
+
+    /*
+     * If this lock request can block, ask the caller to wait for a
+     * notification. Do not access req after this. Once the lock is
+     * available, send a notification instead.
+     */
+    if (sleep && lock->l_type != F_UNLCK) {
+        /*
+         * If the notification queue is not enabled, we can't support
+         * async locks.
+         */
+        if (!se->notify_enabled) {
+            saverr = EOPNOTSUPP;
+            goto out;
+        }
+        blocking_lock = true;
+        unique = req->unique;
+        fuse_reply_wait(req);
+    }
+
+    /* TODO: Is it alright to modify flock?
     */
     lock->l_pid = 0;
-    ret = fcntl(plock->fd, F_OFD_SETLK, lock);
+    if (blocking_lock) {
+        ret = fcntl(ofd, F_OFD_SETLKW, lock);
+    } else {
+        ret = fcntl(ofd, F_OFD_SETLK, lock);
+    }
     if (ret == -1) {
         saverr = errno;
     }

 out:
-    pthread_mutex_unlock(&inode->plock_mutex);
     lo_inode_put(lo, &inode);
-    fuse_reply_err(req, saverr);
+    if (!blocking_lock) {
+        fuse_reply_err(req, saverr);
+    } else {
+        setlk_send_notification(se, unique, saverr);
+    }
 }

 static void lo_fsyncdir(fuse_req_t req, fuse_ino_t ino, int datasync,

From patchwork Thu Sep 30 15:30:37 2021
X-Patchwork-Submitter: Vivek Goyal
X-Patchwork-Id: 12528763
From: Vivek Goyal
To: qemu-devel@nongnu.org, virtio-fs@redhat.com, stefanha@redhat.com
Cc: jaggel@bu.edu, iangelak@redhat.com, dgilbert@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu
Subject: [PATCH 13/13] virtiofsd, seccomp: Add clock_nanosleep() to allow list
Date: Thu, 30 Sep 2021 11:30:37 -0400
Message-Id: <20210930153037.1194279-14-vgoyal@redhat.com>
In-Reply-To: <20210930153037.1194279-1-vgoyal@redhat.com>
References: <20210930153037.1194279-1-vgoyal@redhat.com>

g_usleep() calls nanosleep(), which now seems to call the
clock_nanosleep() syscall. These patches make use of g_usleep(), so add
clock_nanosleep() to the list of allowed syscalls.

Signed-off-by: Vivek Goyal
---
 tools/virtiofsd/passthrough_seccomp.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/virtiofsd/passthrough_seccomp.c b/tools/virtiofsd/passthrough_seccomp.c
index cd24b40b78..03080806c0 100644
--- a/tools/virtiofsd/passthrough_seccomp.c
+++ b/tools/virtiofsd/passthrough_seccomp.c
@@ -117,6 +117,7 @@ static const int syscall_allowlist[] = {
     SCMP_SYS(writev),
     SCMP_SYS(umask),
     SCMP_SYS(nanosleep),
+    SCMP_SYS(clock_nanosleep),
 };

 /* Syscalls used when --syslog is enabled */