From patchwork Fri Jan 15 09:26:42 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12021987
From: Keqian Zhu
To: Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon, Marc Zyngier, Catalin Marinas
Subject: [PATCH v2 1/2] vfio/iommu_type1: Populate full dirty when detach non-pinned group
Date: Fri, 15 Jan 2021 17:26:42 +0800
Message-ID: <20210115092643.728-2-zhukeqian1@huawei.com>
In-Reply-To: <20210115092643.728-1-zhukeqian1@huawei.com>
References: <20210115092643.728-1-zhukeqian1@huawei.com>
Cc: Mark Rutland, jiangkunkun@huawei.com, Suzuki K Poulose, Joerg Roedel, Daniel Lezcano, Alexios Zavras, James Morse, wanghaibin.wang@huawei.com, Thomas Gleixner, Robin Murphy, Andrew Morton, Julien Thierry

If a group with non-pinned-page dirty scope is detached with dirty logging enabled, we should fully populate the dirty bitmaps at the time it's removed, since we don't know the extent of its previous DMA, nor will the group be present to trigger the full bitmap when the user retrieves the dirty bitmap.

Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
Suggested-by: Alex Williamson
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 0b4dedaa9128..4e82b9a3440f 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -236,6 +236,19 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
 	}
 }
 
+static void vfio_iommu_populate_bitmap_full(struct vfio_iommu *iommu)
+{
+	struct rb_node *n;
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+
+		if (dma->iommu_mapped)
+			bitmap_set(dma->bitmap, 0, dma->size >> pgshift);
+	}
+}
+
 static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
 {
 	struct rb_node *n;
@@ -2415,8 +2428,11 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	 * Removal of a group without dirty tracking may allow the iommu scope
 	 * to be promoted.
 	 */
-	if (update_dirty_scope)
+	if (update_dirty_scope) {
 		update_pinned_page_dirty_scope(iommu);
+		if (iommu->dirty_page_tracking)
+			vfio_iommu_populate_bitmap_full(iommu);
+	}
 	mutex_unlock(&iommu->lock);
 }
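
[Editor's note: the following is an illustrative userspace sketch, not part of the patch. It shows how the change is observed from userspace: after a non-pinned group is detached while dirty tracking is on, a VFIO_IOMMU_DIRTY_PAGES / _GET_BITMAP query over an iommu-mapped range reads back as fully dirty. The ioctl, flags and structs are the standard VFIO type1 UAPI from <linux/vfio.h>; container_fd, iova, size and pgsize are assumptions supplied by the caller, and the range is assumed to match an existing mapping.]

/* Hypothetical helper: fetch the dirty bitmap for one mapped IOVA range. */
#include <linux/vfio.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>

static int get_dirty_bitmap(int container_fd, __u64 iova, __u64 size, __u64 pgsize)
{
	/* Bitmap size in bytes, rounded up to a multiple of u64. */
	__u64 bitmap_bytes = ((size / pgsize + 63) / 64) * 8;
	struct vfio_iommu_type1_dirty_bitmap *dbitmap;
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	__u64 *data;
	int ret;

	data = calloc(1, bitmap_bytes);
	dbitmap = malloc(sizeof(*dbitmap) + sizeof(*range));
	if (!data || !dbitmap) {
		free(data);
		free(dbitmap);
		return -1;
	}

	dbitmap->argsz = sizeof(*dbitmap) + sizeof(*range);
	dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;

	/* The get-bitmap payload lives in the flexible data[] tail. */
	range = (struct vfio_iommu_type1_dirty_bitmap_get *)dbitmap->data;
	range->iova = iova;
	range->size = size;
	range->bitmap.pgsize = pgsize;
	range->bitmap.size = bitmap_bytes;
	range->bitmap.data = data;

	/*
	 * With this fix, if a group with non-pinned-page dirty scope was
	 * detached while tracking was enabled, every bit for iommu-mapped
	 * DMA comes back set here.
	 */
	ret = ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
	if (!ret)
		printf("first word of dirty bitmap: 0x%016llx\n",
		       (unsigned long long)data[0]);

	free(dbitmap);
	free(data);
	return ret;
}

[The conservative full-dirty marking only touches iommu_mapped entries because groups with pinned-page scope already report their dirty pages through vfio_pin_pages; only fully iommu-mapped DMA has an unknown dirty extent once the group is gone.]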