From patchwork Mon Feb 28 00:50:53 2022
From: Lu Baolu
To: Greg Kroah-Hartman, Joerg Roedel, Alex Williamson, Bjorn Helgaas,
    Jason Gunthorpe, Christoph Hellwig, Kevin Tian, Ashok Raj
Cc: Will Deacon, Robin Murphy, Dan Williams, rafael@kernel.org,
    Diana Craciun, Cornelia Huck, Eric Auger, Liu Yi L, Jacob jun Pan,
    Chaitanya Kulkarni, Stuart Yoder, Laurentiu Tudor, Thierry Reding,
    David Airlie, Daniel Vetter, Jonathan Hunter, Li Yang,
    Dmitry Osipenko, iommu@lists.linux-foundation.org,
    linux-pci@vger.kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, Lu Baolu
Subject: [PATCH v7 08/11] vfio: Remove use of vfio_group_viable()
Date: Mon, 28 Feb 2022 08:50:53 +0800
Message-Id: <20220228005056.599595-9-baolu.lu@linux.intel.com>
In-Reply-To: <20220228005056.599595-1-baolu.lu@linux.intel.com>
References: <20220228005056.599595-1-baolu.lu@linux.intel.com>

As DMA ownership is claimed for the iommu group when a VFIO group is
added to a VFIO container, the VFIO group's viability is guaranteed as
long as group->container_users > 0. Remove the unnecessary group
viability checks that are only reached when group->container_users is
not zero. The only remaining reference is in GROUP_GET_STATUS, which
can be called at any time while the group fd is valid.
Here we just replace vfio_group_viable() by directly calling the IOMMU
core to get the viability status.

Signed-off-by: Lu Baolu
Reviewed-by: Jason Gunthorpe
Acked-by: Alex Williamson
---
 drivers/vfio/vfio.c | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index df9d4b60e5ae..73034446e03f 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -1313,12 +1313,6 @@ static int vfio_group_set_container(struct vfio_group *group, int container_fd)
 	return ret;
 }
 
-static bool vfio_group_viable(struct vfio_group *group)
-{
-	return (iommu_group_for_each_dev(group->iommu_group,
-					 group, vfio_dev_viable) == 0);
-}
-
 static int vfio_group_add_container_user(struct vfio_group *group)
 {
 	if (!atomic_inc_not_zero(&group->container_users))
@@ -1328,7 +1322,7 @@ static int vfio_group_add_container_user(struct vfio_group *group)
 		atomic_dec(&group->container_users);
 		return -EPERM;
 	}
-	if (!group->container->iommu_driver || !vfio_group_viable(group)) {
+	if (!group->container->iommu_driver) {
 		atomic_dec(&group->container_users);
 		return -EINVAL;
 	}
@@ -1346,7 +1340,7 @@ static int vfio_group_get_device_fd(struct vfio_group *group, char *buf)
 	int ret = 0;
 
 	if (0 == atomic_read(&group->container_users) ||
-	    !group->container->iommu_driver || !vfio_group_viable(group))
+	    !group->container->iommu_driver)
 		return -EINVAL;
 
 	if (group->type == VFIO_NO_IOMMU && !capable(CAP_SYS_RAWIO))
@@ -1438,11 +1432,11 @@ static long vfio_group_fops_unl_ioctl(struct file *filep,
 
 		status.flags = 0;
 
-		if (vfio_group_viable(group))
-			status.flags |= VFIO_GROUP_FLAGS_VIABLE;
-
 		if (group->container)
-			status.flags |= VFIO_GROUP_FLAGS_CONTAINER_SET;
+			status.flags |= VFIO_GROUP_FLAGS_CONTAINER_SET |
+					VFIO_GROUP_FLAGS_VIABLE;
+		else if (!iommu_group_dma_owner_claimed(group->iommu_group))
+			status.flags |= VFIO_GROUP_FLAGS_VIABLE;
 
 		if (copy_to_user((void __user *)arg, &status, minsz))
 			return -EFAULT;
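
For readers who want to see the resulting VFIO_GROUP_GET_STATUS flag
behaviour in isolation, below is a minimal stand-alone C sketch, not
part of the patch: report_group_status() and its two boolean parameters
are hypothetical illustration-only names, the flag values mirror the
VFIO uAPI, and the branch structure mirrors the last hunk above (a
group attached to a container is reported viable by construction;
otherwise viability follows what the IOMMU core reports about DMA
ownership).

/* Hedged sketch only -- not kernel code from this patch. */
#include <stdbool.h>
#include <stdio.h>

#define VFIO_GROUP_FLAGS_VIABLE		(1 << 0)
#define VFIO_GROUP_FLAGS_CONTAINER_SET	(1 << 1)

/* Hypothetical helper restating the patched ioctl flag logic. */
static unsigned int report_group_status(bool container_set,
					bool dma_owner_claimed)
{
	unsigned int flags = 0;

	if (container_set)
		/* Attached to a container: viable by construction. */
		flags |= VFIO_GROUP_FLAGS_CONTAINER_SET |
			 VFIO_GROUP_FLAGS_VIABLE;
	else if (!dma_owner_claimed)
		/* No container: defer to the IOMMU core's ownership state. */
		flags |= VFIO_GROUP_FLAGS_VIABLE;

	return flags;
}

int main(void)
{
	printf("in container:      0x%x\n", report_group_status(true, false));
	printf("free, unclaimed:   0x%x\n", report_group_status(false, false));
	printf("claimed elsewhere: 0x%x\n", report_group_status(false, true));
	return 0;
}

Running the sketch prints 0x3 (CONTAINER_SET | VIABLE) for the attached
case, 0x1 (VIABLE) when no other driver has claimed DMA ownership of
the group, and 0x0 when ownership is held elsewhere.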