From patchwork Thu Dec 10 07:34:19 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11963565
From: Keqian Zhu
To: Alex Williamson, Cornelia Huck, Marc Zyngier, Will Deacon, Robin Murphy
Subject: [PATCH 1/7] vfio: iommu_type1: Clear added dirty bit when unwind pin
Date: Thu, 10 Dec 2020 15:34:19 +0800
Message-ID: <20201210073425.25960-2-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>
References: <20201210073425.25960-1-zhukeqian1@huawei.com>
Cc: Suzuki K Poulose, Catalin Marinas, Joerg Roedel, jiangkunkun@huawei.com,
	Sean Christopherson, Alexios Zavras, Mark Brown,
	James Morse, wanghaibin.wang@huawei.com, Thomas Gleixner, Keqian Zhu,
	Andrew Morton, Julien Thierry

Currently we do not clear the dirty bits added to the bitmap when we unwind
a pin, so if pinning fails halfway through, unnecessary dirty bits are left
set in the bitmap. By clearing the bits added during the pin on unwind,
userspace sees fewer dirty pages, which saves it considerable time handling
them.

Note that we must distinguish the bits added by this pin operation from the
bits that were already set before it, so introduce bitmap_added to record
this.

Signed-off-by: Keqian Zhu
Reported-by: kernel test robot
Reported-by: Dan Carpenter
---
 drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 67e827638995..f129d24a6ec3 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -637,7 +637,11 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_group *group;
 	int i, j, ret;
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
 	unsigned long remote_vaddr;
+	unsigned long bitmap_offset;
+	unsigned long *bitmap_added;
+	dma_addr_t iova;
 	struct vfio_dma *dma;
 	bool do_accounting;
 
@@ -650,6 +654,12 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 
 	mutex_lock(&iommu->lock);
 
+	bitmap_added = bitmap_zalloc(npage, GFP_KERNEL);
+	if (!bitmap_added) {
+		ret = -ENOMEM;
+		goto pin_done;
+	}
+
 	/* Fail if notifier list is empty */
 	if (!iommu->notifier.head) {
 		ret = -EINVAL;
@@ -664,7 +674,6 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 	do_accounting = !IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu);
 
 	for (i = 0; i < npage; i++) {
-		dma_addr_t iova;
 		struct vfio_pfn *vpfn;
 
 		iova = user_pfn[i] << PAGE_SHIFT;
@@ -699,14 +708,10 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 		}
 
 		if (iommu->dirty_page_tracking) {
-			unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
-
-			/*
-			 * Bitmap populated with the smallest supported page
-			 * size
-			 */
-			bitmap_set(dma->bitmap,
-				   (iova - dma->iova) >> pgshift, 1);
+			/* Populated with the smallest supported page size */
+			bitmap_offset = (iova - dma->iova) >> pgshift;
+			if (!test_and_set_bit(bitmap_offset, dma->bitmap))
+				set_bit(i, bitmap_added);
 		}
 	}
 	ret = i;
@@ -722,14 +727,20 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 pin_unwind:
 	phys_pfn[i] = 0;
 	for (j = 0; j < i; j++) {
-		dma_addr_t iova;
-
 		iova = user_pfn[j] << PAGE_SHIFT;
 		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
 		vfio_unpin_page_external(dma, iova, do_accounting);
 		phys_pfn[j] = 0;
+
+		if (test_bit(j, bitmap_added)) {
+			bitmap_offset = (iova - dma->iova) >> pgshift;
+			clear_bit(bitmap_offset, dma->bitmap);
+		}
 	}
 pin_done:
+	if (bitmap_added)
+		bitmap_free(bitmap_added);
+
 	mutex_unlock(&iommu->lock);
 	return ret;
 }
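[Editor's note] The unwind scheme in patch 1 above — remember which dirty bits this pin call itself set, then clear exactly those on failure — can be sketched in plain userspace C. The bit helpers below are simplified, non-atomic stand-ins for the kernel's test_and_set_bit()/clear_bit(), and mark_dirty_range() is a hypothetical driver loop, not the kernel function:

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Simplified, non-atomic stand-ins for the kernel bitmap helpers. */
static int test_and_set_bit(unsigned long nr, unsigned long *map)
{
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);
	unsigned long old = map[nr / BITS_PER_LONG];

	map[nr / BITS_PER_LONG] |= mask;
	return (old & mask) != 0;
}

static void clear_bit(unsigned long nr, unsigned long *map)
{
	map[nr / BITS_PER_LONG] &= ~(1UL << (nr % BITS_PER_LONG));
}

static int test_bit(unsigned long nr, const unsigned long *map)
{
	return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}

/*
 * Mark pages [0, npage) dirty, recording in 'added' the bits that this
 * call set itself.  On a simulated failure at page 'fail', the unwind
 * clears only the recorded bits, so dirty bits that were set before
 * the call survive.  Both bitmaps must hold at least npage bits.
 */
static int mark_dirty_range(unsigned long *dirty, unsigned long *added,
			    unsigned long npage, unsigned long fail)
{
	unsigned long i, j;

	for (i = 0; i < npage; i++) {
		if (i == fail)
			goto unwind;
		if (!test_and_set_bit(i, dirty))
			test_and_set_bit(i, added);
	}
	return 0;

unwind:
	for (j = 0; j < i; j++)
		if (test_bit(j, added))
			clear_bit(j, dirty);
	return -1;
}
```

For example, if bit 2 is already dirty before a pin of eight pages that fails at page 5, the unwind clears bits 0, 1, 3 and 4 but leaves bit 2 set, which is exactly the behavior the patch adds.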
From patchwork Thu Dec 10 07:34:20 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11963557
From: Keqian Zhu
Subject: [PATCH 2/7] vfio: iommu_type1: Initially set the pinned_page_dirty_scope
Date: Thu, 10 Dec 2020 15:34:20 +0800
Message-ID: <20201210073425.25960-3-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

There are currently three ways to promote the pinned_page_dirty_scope status
of a vfio_iommu:

1. Through the pin interface.
2. Detaching a group without dirty tracking.
3. Attaching a group with dirty tracking.

For point 3, the only case in which attaching can change the pinned status
is when the vfio_iommu is newly created. Since we can safely set the pinned
status when creating a new vfio_iommu, point 3 can be removed to reduce
operations.
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index f129d24a6ec3..c52bcefba96b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2064,12 +2064,8 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 			 * Non-iommu backed group cannot dirty memory directly,
 			 * it can only use interfaces that provide dirty
 			 * tracking.
-			 * The iommu scope can only be promoted with the
-			 * addition of a dirty tracking group.
 			 */
 			group->pinned_page_dirty_scope = true;
-			if (!iommu->pinned_page_dirty_scope)
-				update_pinned_page_dirty_scope(iommu);
 			mutex_unlock(&iommu->lock);
 
 			return 0;
@@ -2457,6 +2453,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
 	INIT_LIST_HEAD(&iommu->iova_list);
 	iommu->dma_list = RB_ROOT;
 	iommu->dma_avail = dma_entry_limit;
+	iommu->pinned_page_dirty_scope = true;
 	mutex_init(&iommu->lock);
 	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
From patchwork Thu Dec 10 07:34:21 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11963563
From: Keqian Zhu
Subject: [PATCH 3/7] vfio: iommu_type1: Make an explicit "promote" semantic
Date: Thu, 10 Dec 2020 15:34:21 +0800
Message-ID: <20201210073425.25960-4-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

When we want to promote the pinned_page_dirty_scope of a vfio_iommu, we have
to call the "update" function to visit all vfio_groups, but when we want to
downgrade it, we can clear the flag directly. Given that, we can give the
function an explicit "promote" semantic. As a bonus, if the vfio_iommu has
already been promoted, the function can return early.
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index c52bcefba96b..bd9a94590ebc 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -148,7 +148,7 @@ static int put_pfn(unsigned long pfn, int prot);
 static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
 					       struct iommu_group *iommu_group);
 
-static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu);
+static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu);
 /*
  * This code handles mapping and unmapping of user data buffers
  * into DMA'ble space using the IOMMU
@@ -719,7 +719,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 	group = vfio_iommu_find_iommu_group(iommu, iommu_group);
 	if (!group->pinned_page_dirty_scope) {
 		group->pinned_page_dirty_scope = true;
-		update_pinned_page_dirty_scope(iommu);
+		promote_pinned_page_dirty_scope(iommu);
 	}
 
 	goto pin_done;
@@ -1633,27 +1633,26 @@ static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
 	return group;
 }
 
-static void update_pinned_page_dirty_scope(struct vfio_iommu *iommu)
+static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
 	struct vfio_group *group;
 
+	if (iommu->pinned_page_dirty_scope)
+		return;
+
 	list_for_each_entry(domain, &iommu->domain_list, next) {
 		list_for_each_entry(group, &domain->group_list, next) {
-			if (!group->pinned_page_dirty_scope) {
-				iommu->pinned_page_dirty_scope = false;
+			if (!group->pinned_page_dirty_scope)
 				return;
-			}
 		}
 	}
 
 	if (iommu->external_domain) {
 		domain = iommu->external_domain;
 		list_for_each_entry(group, &domain->group_list, next) {
-			if (!group->pinned_page_dirty_scope) {
-				iommu->pinned_page_dirty_scope = false;
+			if (!group->pinned_page_dirty_scope)
 				return;
-			}
 		}
 	}
 
@@ -2348,7 +2347,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_domain *domain;
 	struct vfio_group *group;
-	bool update_dirty_scope = false;
+	bool promote_dirty_scope = false;
 	LIST_HEAD(iova_copy);
 
 	mutex_lock(&iommu->lock);
@@ -2356,7 +2355,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	if (iommu->external_domain) {
 		group = find_iommu_group(iommu->external_domain, iommu_group);
 		if (group) {
-			update_dirty_scope = !group->pinned_page_dirty_scope;
+			promote_dirty_scope = !group->pinned_page_dirty_scope;
 			list_del(&group->next);
 			kfree(group);
@@ -2386,7 +2385,7 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 			continue;
 
 		vfio_iommu_detach_group(domain, group);
-		update_dirty_scope = !group->pinned_page_dirty_scope;
+		promote_dirty_scope = !group->pinned_page_dirty_scope;
 		list_del(&group->next);
 		kfree(group);
 		/*
@@ -2422,8 +2421,8 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	 * Removal of a group without dirty tracking may allow the iommu scope
 	 * to be promoted.
 	 */
-	if (update_dirty_scope)
-		update_pinned_page_dirty_scope(iommu);
+	if (promote_dirty_scope)
+		promote_pinned_page_dirty_scope(iommu);
 	mutex_unlock(&iommu->lock);
 }
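[Editor's note] The one-way flag transition that gives the "promote" function in patch 3 its name can be sketched with a simplified userspace model. The struct and function names below are illustrative stand-ins for the kernel's vfio_iommu/vfio_group and a single flat group list replaces the nested domain lists:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, flattened models of vfio_group and vfio_iommu. */
struct group {
	int pinned_page_dirty_scope;
	struct group *next;
};

struct iommu {
	int pinned_page_dirty_scope;
	struct group *groups;
};

/*
 * "Promote" semantic: the function only ever moves the flag from false
 * to true, and only when every group qualifies.  Downgrading is done by
 * the caller clearing the flag directly, which is what makes the early
 * return on an already-promoted iommu safe.
 */
static void promote_dirty_scope(struct iommu *iommu)
{
	struct group *g;

	if (iommu->pinned_page_dirty_scope)
		return;		/* already promoted, nothing to scan */

	for (g = iommu->groups; g; g = g->next)
		if (!g->pinned_page_dirty_scope)
			return;	/* one non-pinned group blocks promotion */

	iommu->pinned_page_dirty_scope = 1;
}
```

The design point is the asymmetry: promotion requires a full scan over all groups, while a downgrade is a single flag write, so only the scan needs to live in a shared helper.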
From patchwork Thu Dec 10 07:34:22 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11963555
From: Keqian Zhu
Subject: [PATCH 4/7] vfio: iommu_type1: Fix missing dirty page when promote pinned_scope
Date: Thu, 10 Dec 2020 15:34:22 +0800
Message-ID: <20201210073425.25960-5-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

When we pin or detach a group that is not capable of dirty tracking, we try
to promote the pinned_scope of the vfio_iommu. If we succeed, vfio
thereafter reports only the pinned scope as dirty to userspace, so memory
written before the pin or detach is missed.

The fix is to populate all DMA ranges as dirty before promoting the
pinned_scope of the vfio_iommu.

Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index bd9a94590ebc..00684597b098 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1633,6 +1633,20 @@ static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu,
 	return group;
 }
 
+static void vfio_populate_bitmap_all(struct vfio_iommu *iommu)
+{
+	struct rb_node *n;
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+		unsigned long nbits = dma->size >> pgshift;
+
+		if (dma->iommu_mapped)
+			bitmap_set(dma->bitmap, 0, nbits);
+	}
+}
+
 static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
 {
 	struct vfio_domain *domain;
@@ -1657,6 +1671,10 @@ static void promote_pinned_page_dirty_scope(struct vfio_iommu *iommu)
 	}
 
 	iommu->pinned_page_dirty_scope = true;
+
+	/* Set all bitmap to avoid missing dirty page */
+	if (iommu->dirty_page_tracking)
+		vfio_populate_bitmap_all(iommu);
 }
 
 static bool vfio_iommu_has_sw_msi(struct list_head *group_resv_regions,
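[Editor's note] The core of patch 4's fix — mark every page of an iommu-mapped range dirty before narrowing the reporting scope, with the bit count derived from the range size and page shift — can be sketched in userspace C. bitmap_set_range() and populate_bitmap() below are simplified stand-ins, not the kernel helpers:

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Simplified stand-in for the kernel's bitmap_set() over a small range. */
static void bitmap_set_range(unsigned long *map, unsigned long start,
			     unsigned long nbits)
{
	unsigned long i;

	for (i = start; i < start + nbits; i++)
		map[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
}

/*
 * Before promoting to pinned-page scope, conservatively mark every page
 * of an iommu-mapped range dirty so writes that happened before the
 * promotion are not lost.  One bit per page: nbits = size >> pgshift.
 */
static void populate_bitmap(unsigned long *bitmap, unsigned long size,
			    unsigned long pgshift, int iommu_mapped)
{
	if (iommu_mapped)
		bitmap_set_range(bitmap, 0, size >> pgshift);
}
```

For a 16 KiB range with 4 KiB pages (pgshift 12) this sets four bits; ranges that are not iommu-mapped are skipped, mirroring the `dma->iommu_mapped` check in the patch.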
From patchwork Thu Dec 10 07:34:23 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11963559
From: Keqian Zhu
Subject: [PATCH 5/7] vfio: iommu_type1: Drop parameter "pgsize" of vfio_dma_bitmap_alloc_all
Date: Thu, 10 Dec 2020 15:34:23 +0800
Message-ID: <20201210073425.25960-6-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

We always use the smallest supported page size of the vfio_iommu as pgsize,
so remove the "pgsize" parameter of vfio_dma_bitmap_alloc_all and compute it
internally.

Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 00684597b098..32ab889c8193 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -236,9 +236,10 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
 	}
 }
 
-static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
+static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu)
 {
 	struct rb_node *n;
+	size_t pgsize = (size_t)1 << __ffs(iommu->pgsize_bitmap);
 
 	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
 		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
@@ -2798,12 +2799,9 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 		return -EINVAL;
 
 	if (dirty.flags & VFIO_IOMMU_DIRTY_PAGES_FLAG_START) {
-		size_t pgsize;
-
 		mutex_lock(&iommu->lock);
-		pgsize = 1 << __ffs(iommu->pgsize_bitmap);
 		if (!iommu->dirty_page_tracking) {
-			ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
+			ret = vfio_dma_bitmap_alloc_all(iommu);
 			if (!ret)
 				iommu->dirty_page_tracking = true;
 		}
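[Editor's note] The `1 << __ffs(pgsize_bitmap)` computation that patches 5 and 6 move into the callees works because pgsize_bitmap has one bit set per supported page size, so its lowest set bit is the smallest size. A userspace sketch, where lowest_set_bit() is a stand-in for the kernel's __ffs() (and, like __ffs(), requires a nonzero argument):

```c
#include <assert.h>

/* Stand-in for the kernel's __ffs(): index of the lowest set bit.
 * Undefined for x == 0, matching __ffs()'s contract. */
static unsigned long lowest_set_bit(unsigned long x)
{
	unsigned long n = 0;

	while (!(x & 1UL)) {
		x >>= 1;
		n++;
	}
	return n;
}

/*
 * pgsize_bitmap has one bit set per supported page size, so the
 * smallest supported size is 1 << __ffs(pgsize_bitmap).  Because this
 * is derivable from the iommu alone, callers no longer need to pass
 * pgsize down explicitly.
 */
static unsigned long smallest_pgsize(unsigned long pgsize_bitmap)
{
	return 1UL << lowest_set_bit(pgsize_bitmap);
}
```

This is why dropping the "pgsize" parameter loses no information: every caller was computing the same value from the same field.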
DKIM_SIGNED,DKIM_VALID,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER, INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7251C4167B for ; Thu, 10 Dec 2020 07:37:20 +0000 (UTC) Received: from merlin.infradead.org (merlin.infradead.org [205.233.59.134]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mail.kernel.org (Postfix) with ESMTPS id 96D5620727 for ; Thu, 10 Dec 2020 07:37:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 96D5620727 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=huawei.com Authentication-Results: mail.kernel.org; spf=none smtp.mailfrom=linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=merlin.20170209; h=Sender:Content-Transfer-Encoding: Content-Type:Cc:List-Subscribe:List-Help:List-Post:List-Archive: List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To:Message-ID:Date: Subject:To:From:Reply-To:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=QAst7irzP2bkLgd3K6NNRc4QT3J0vC5EPyHqUqazDIc=; b=Z8ry4ryKNfjOIBsh5+a7SLd8b saXt44Z8R3aBP/Vln3haU/Oy9q053gSVi1pkO6z1PhA1JudtlZhxbiM4/AMFkPkCfSeYiaCQh3DVE l9c1tZLNPLVxvBERavfWHg5CYS1PfhMCHAK+QfS0luL0vBdR2AyaO58FuTqseogXhUKGmx/2+GdZO xmbAjCqL6hSfeiR2Xw1XCOHaWPuyp5Sl7y+0osj769QT7qrsk/Q47GNuPmxiXekN4T9YdgCycFA9V LDebc88PP2pvgleY2ELWgxOcXSWq4kYm9GdbnvXBusY48LsoqjSwXvspWI/PMRHnq14gPTC//BJmU LhZqoJLGw==; Received: from localhost ([::1] helo=merlin.infradead.org) by merlin.infradead.org with esmtp (Exim 4.92.3 #3 (Red Hat Linux)) id 1knGUo-0007Xq-84; Thu, 10 Dec 2020 07:36:14 +0000 Received: from 
From patchwork Thu Dec 10 07:34:24 2020

From: Keqian Zhu
Subject: [PATCH 6/7] vfio: iommu_type1: Drop parameter "pgsize" of vfio_iova_dirty_bitmap
Date: Thu, 10 Dec 2020 15:34:24 +0800
Message-ID: <20201210073425.25960-7-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

We always use the smallest supported page size of vfio_iommu as pgsize.
Remove parameter "pgsize" of vfio_iova_dirty_bitmap.
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 32ab889c8193..2d7a5cd9b916 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1026,11 +1026,12 @@ static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 }
 
 static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
-				  dma_addr_t iova, size_t size, size_t pgsize)
+				  dma_addr_t iova, size_t size)
 {
 	struct vfio_dma *dma;
 	struct rb_node *n;
-	unsigned long pgshift = __ffs(pgsize);
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+	size_t pgsize = (size_t)1 << pgshift;
 	int ret;
 
 	/*
@@ -2861,8 +2862,7 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 		if (iommu->dirty_page_tracking)
 			ret = vfio_iova_dirty_bitmap(range.bitmap.data, iommu,
 						     range.iova,
-						     range.size,
-						     range.bitmap.pgsize);
+						     range.size);
 		else
 			ret = -EINVAL;
 out_unlock:
From patchwork Thu Dec 10 07:34:25 2020

From: Keqian Zhu
Subject: [PATCH 7/7] vfio: iommu_type1: Drop parameter "pgsize" of update_user_bitmap
Date: Thu, 10 Dec 2020 15:34:25 +0800
Message-ID: <20201210073425.25960-8-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>

We always use the smallest supported page size of vfio_iommu as pgsize.
Drop parameter "pgsize" of update_user_bitmap.
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 2d7a5cd9b916..edb0a6468e8d 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -989,10 +989,9 @@ static void vfio_update_pgsize_bitmap(struct vfio_iommu *iommu)
 }
 
 static int update_user_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
-			      struct vfio_dma *dma, dma_addr_t base_iova,
-			      size_t pgsize)
+			      struct vfio_dma *dma, dma_addr_t base_iova)
 {
-	unsigned long pgshift = __ffs(pgsize);
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
 	unsigned long nbits = dma->size >> pgshift;
 	unsigned long bit_offset = (dma->iova - base_iova) >> pgshift;
 	unsigned long copy_offset = bit_offset / BITS_PER_LONG;
@@ -1057,7 +1056,7 @@ static int vfio_iova_dirty_bitmap(u64 __user *bitmap, struct vfio_iommu *iommu,
 		if (dma->iova > iova + size - 1)
 			break;
 
-		ret = update_user_bitmap(bitmap, iommu, dma, iova, pgsize);
+		ret = update_user_bitmap(bitmap, iommu, dma, iova);
 		if (ret)
 			return ret;
 
@@ -1203,7 +1202,7 @@ static int vfio_dma_do_unmap(struct vfio_iommu *iommu,
 		if (unmap->flags & VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP) {
 			ret = update_user_bitmap(bitmap->data, iommu, dma,
-						 unmap->iova, pgsize);
+						 unmap->iova);
 			if (ret)
 				break;
 		}