From patchwork Thu Jan 28 15:17:41 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12053945
From: Keqian Zhu <zhukeqian1@huawei.com>
To: Will Deacon, Alex Williamson, Marc Zyngier, Catalin Marinas
Subject: [RFC PATCH 10/11] vfio/iommu_type1: Optimize dirty bitmap population
 based on iommu HWDBM
Date: Thu, 28 Jan 2021 23:17:41 +0800
Message-ID: <20210128151742.18840-11-zhukeqian1@huawei.com>
In-Reply-To: <20210128151742.18840-1-zhukeqian1@huawei.com>
References: <20210128151742.18840-1-zhukeqian1@huawei.com>
Cc: Mark Rutland, jiangkunkun@huawei.com, Suzuki K Poulose, Cornelia Huck,
 lushenming@huawei.com, Kirti Wankhede, James Morse, yuzenghui@huawei.com,
 wanghaibin.wang@huawei.com, Robin Murphy

From: Kunkun Jiang <jiangkunkun@huawei.com>

Previously, if the vfio_iommu is not of pinned_page_dirty_scope and the
vfio_dma is iommu_mapped, we populate the full dirty bitmap for that
vfio_dma. Now we first try to get the dirty log from the IOMMU before
falling back to that pessimistic choice.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
---
 drivers/vfio/vfio_iommu_type1.c | 97 ++++++++++++++++++++++++++++++++-
 1 file changed, 94 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 3b8522ebf955..1cd10f3e7ed4 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -999,6 +999,25 @@ static bool vfio_group_supports_hwdbm(struct vfio_group *group)
 	return true;
 }
 
+static int vfio_iommu_dirty_log_clear(struct vfio_iommu *iommu,
+				      dma_addr_t start_iova, size_t size,
+				      unsigned long *bitmap_buffer,
+				      dma_addr_t base_iova, size_t pgsize)
+{
+	struct vfio_domain *d;
+	unsigned long pgshift = __ffs(pgsize);
+	int ret;
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		ret = iommu_clear_dirty_log(d->domain, start_iova, size,
+					    bitmap_buffer, base_iova, pgshift);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 			      struct vfio_dma *dma, dma_addr_t base_iova,
 			      size_t pgsize)
@@ -1010,13 +1029,28 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 	unsigned long shift = bit_offset % BITS_PER_LONG;
 	unsigned long leftover;
 
+	if (iommu->pinned_page_dirty_scope || !dma->iommu_mapped)
+		goto bitmap_done;
+
+	/* try to get dirty log from IOMMU */
+	if (!iommu->num_non_hwdbm_groups) {
+		struct vfio_domain *d;
+
+		list_for_each_entry(d, &iommu->domain_list, next) {
+			if (iommu_sync_dirty_log(d->domain, dma->iova, dma->size,
+						 dma->bitmap, dma->iova, pgshift))
+				return -EFAULT;
+		}
+		goto bitmap_done;
+	}
+
 	/*
 	 * mark all pages dirty if any IOMMU capable device is not able
 	 * to report dirty pages and all pages are pinned and mapped.
 	 */
-	if (!iommu->pinned_page_dirty_scope && dma->iommu_mapped)
-		bitmap_set(dma->bitmap, 0, nbits);
+	bitmap_set(dma->bitmap, 0, nbits);
 
+bitmap_done:
 	if (shift) {
 		bitmap_shift_left(dma->bitmap, dma->bitmap, shift,
 				  nbits + shift);
@@ -1078,6 +1112,18 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 		 */
 		bitmap_clear(dma->bitmap, 0, dma->size >> pgshift);
 		vfio_dma_populate_bitmap(dma, pgsize);
+
+		/* Clear iommu dirty log to re-enable dirty log tracking */
+		if (!iommu->pinned_page_dirty_scope &&
+		    dma->iommu_mapped && !iommu->num_non_hwdbm_groups) {
+			ret = vfio_iommu_dirty_log_clear(iommu, dma->iova,
+					dma->size, dma->bitmap, dma->iova,
+					pgsize);
+			if (ret) {
+				pr_warn("dma dirty log clear failed!\n");
+				return ret;
+			}
+		}
 	}
 	return 0;
 }
@@ -2780,6 +2826,48 @@ static int vfio_iommu_type1_unmap_dma(struct vfio_iommu *iommu,
 		-EFAULT : 0;
 }
 
+static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
+				     struct vfio_dma *dma)
+{
+	struct vfio_domain *d;
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		/* Go through all domain anyway even if we fail */
+		iommu_split_block(d->domain, dma->iova, dma->size);
+	}
+}
+
+static void vfio_dma_dirty_log_stop(struct vfio_iommu *iommu,
+				    struct vfio_dma *dma)
+{
+	struct vfio_domain *d;
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		/* Go through all domain anyway even if we fail */
+		iommu_merge_page(d->domain, dma->iova, dma->size,
+				 d->prot | dma->prot);
+	}
+}
+
+static void vfio_iommu_dirty_log_switch(struct vfio_iommu *iommu, bool start)
+{
+	struct rb_node *n;
+
+	/* Split and merge even if all iommu don't support HWDBM now */
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+
+		if (!dma->iommu_mapped)
+			continue;
+
+		/* Go through all dma range anyway even if we fail */
+		if (start)
+			vfio_dma_dirty_log_start(iommu, dma);
+		else
+			vfio_dma_dirty_log_stop(iommu, dma);
+	}
+}
+
 static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 					unsigned long arg)
 {
@@ -2812,8 +2900,10 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 		pgsize = 1 << __ffs(iommu->pgsize_bitmap);
 		if (!iommu->dirty_page_tracking) {
 			ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
-			if (!ret)
+			if (!ret) {
 				iommu->dirty_page_tracking = true;
+				vfio_iommu_dirty_log_switch(iommu, true);
+			}
 		}
 		mutex_unlock(&iommu->lock);
 		return ret;
@@ -2822,6 +2912,7 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 		if (iommu->dirty_page_tracking) {
 			iommu->dirty_page_tracking = false;
 			vfio_dma_bitmap_free_all(iommu);
+			vfio_iommu_dirty_log_switch(iommu, false);
 		}
 		mutex_unlock(&iommu->lock);
 		return 0;
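
For reference, below is a minimal userspace sketch (not part of this patch) of how
a VMM drives the dirty-page tracking path that this patch optimizes, using the
existing VFIO_IOMMU_DIRTY_PAGES UAPI from include/uapi/linux/vfio.h. The helper
names start_dirty_tracking()/get_dirty_bitmap() are made up for illustration, and
"container" is assumed to be an open /dev/vfio/vfio fd with a type1 IOMMU attached
and a DMA mapping covering [iova, iova + size).

/*
 * Illustrative sketch only; error handling trimmed. Helper names are
 * hypothetical, the ioctl and structs are the existing VFIO type1 UAPI.
 */
#include <linux/vfio.h>
#include <stdlib.h>
#include <sys/ioctl.h>

/* Enable dirty page tracking on the container. */
static int start_dirty_tracking(int container)
{
	struct vfio_iommu_type1_dirty_bitmap dirty = {
		.argsz = sizeof(dirty),
		.flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_START,
	};

	return ioctl(container, VFIO_IOMMU_DIRTY_PAGES, &dirty);
}

/* Fetch the dirty bitmap for one mapped IOVA range into 'data'. */
static int get_dirty_bitmap(int container, __u64 iova, __u64 size,
			    __u64 pgsize, __u64 *data)
{
	struct vfio_iommu_type1_dirty_bitmap *dirty;
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	size_t argsz = sizeof(*dirty) + sizeof(*range);
	int ret;

	dirty = calloc(1, argsz);
	if (!dirty)
		return -1;

	dirty->argsz = argsz;
	dirty->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;

	range = (struct vfio_iommu_type1_dirty_bitmap_get *)dirty->data;
	range->iova = iova;
	range->size = size;
	range->bitmap.pgsize = pgsize;
	/* one bit per page, rounded up to 64-bit words, expressed in bytes */
	range->bitmap.size = ((size / pgsize + 63) / 64) * 8;
	range->bitmap.data = data;

	ret = ioctl(container, VFIO_IOMMU_DIRTY_PAGES, dirty);
	free(dirty);
	return ret;
}

With this patch applied, the GET_BITMAP call above returns the dirty log synced
from the IOMMU when all groups support HWDBM, instead of an all-ones bitmap for
iommu_mapped ranges.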