From patchwork Thu Dec 10 07:34:19 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11963565
From: Keqian Zhu
To: Alex Williamson, Cornelia Huck, Marc Zyngier, Will Deacon, Robin Murphy
Subject: [PATCH 1/7] vfio: iommu_type1: Clear added dirty bit when unwind pin
Date: Thu, 10 Dec 2020 15:34:19 +0800
Message-ID: <20201210073425.25960-2-zhukeqian1@huawei.com>
In-Reply-To: <20201210073425.25960-1-zhukeqian1@huawei.com>
References: <20201210073425.25960-1-zhukeqian1@huawei.com>
Cc: Suzuki K Poulose, Catalin Marinas, Joerg Roedel, jiangkunkun@huawei.com,
    Sean Christopherson, Alexios Zavras, Mark Brown, James Morse,
    wanghaibin.wang@huawei.com, Thomas Gleixner, Keqian Zhu, Andrew Morton,
    Julien Thierry

Currently we do not clear the dirty bits added to the bitmap when we unwind
a pin operation, so if pinning fails partway through, unnecessary dirty bits
are left set in the bitmap. By clearing the added dirty bits during unwind,
userspace will see fewer dirty pages, which can save much time handling them.

Note that we should distinguish the bits added by this pin operation from
the bits that were already set before it, so introduce bitmap_added to
record this.

Signed-off-by: Keqian Zhu
Reported-by: kernel test robot
Reported-by: Dan Carpenter
---
 drivers/vfio/vfio_iommu_type1.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 67e827638995..f129d24a6ec3 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -637,7 +637,11 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 	struct vfio_iommu *iommu = iommu_data;
 	struct vfio_group *group;
 	int i, j, ret;
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
 	unsigned long remote_vaddr;
+	unsigned long bitmap_offset;
+	unsigned long *bitmap_added;
+	dma_addr_t iova;
 	struct vfio_dma *dma;
 	bool do_accounting;
 
@@ -650,6 +654,12 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 
 	mutex_lock(&iommu->lock);
 
+	bitmap_added = bitmap_zalloc(npage, GFP_KERNEL);
+	if (!bitmap_added) {
+		ret = -ENOMEM;
+		goto pin_done;
+	}
+
 	/* Fail if notifier list is empty */
 	if (!iommu->notifier.head) {
 		ret = -EINVAL;
@@ -664,7 +674,6 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 	do_accounting = !IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu);
 
 	for (i = 0; i < npage; i++) {
-		dma_addr_t iova;
 		struct vfio_pfn *vpfn;
 
 		iova = user_pfn[i] << PAGE_SHIFT;
@@ -699,14 +708,10 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 		}
 
 		if (iommu->dirty_page_tracking) {
-			unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
-
-			/*
-			 * Bitmap populated with the smallest supported page
-			 * size
-			 */
-			bitmap_set(dma->bitmap,
-				   (iova - dma->iova) >> pgshift, 1);
+			/* Populated with the smallest supported page size */
+			bitmap_offset = (iova - dma->iova) >> pgshift;
+			if (!test_and_set_bit(bitmap_offset, dma->bitmap))
+				set_bit(i, bitmap_added);
 		}
 	}
 	ret = i;
@@ -722,14 +727,20 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
 pin_unwind:
 	phys_pfn[i] = 0;
 	for (j = 0; j < i; j++) {
-		dma_addr_t iova;
-
 		iova = user_pfn[j] << PAGE_SHIFT;
 		dma = vfio_find_dma(iommu, iova, PAGE_SIZE);
 		vfio_unpin_page_external(dma, iova, do_accounting);
 		phys_pfn[j] = 0;
+
+		if (test_bit(j, bitmap_added)) {
+			bitmap_offset = (iova - dma->iova) >> pgshift;
+			clear_bit(bitmap_offset, dma->bitmap);
+		}
 	}
 pin_done:
+	if (bitmap_added)
+		bitmap_free(bitmap_added);
+
 	mutex_unlock(&iommu->lock);
 	return ret;
 }
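
As background for the approach above, the following stand-alone user-space C
sketch models the technique: a scratch "added" bitmap records exactly which
dirty bits the current pin operation set, so a failure can unwind only those
bits and leave previously dirty pages untouched. All names here
(toy_pin_pages, test_and_set, fail_at) are hypothetical illustrations, not
kernel APIs:

/*
 * Toy user-space model of the unwind pattern in this patch (hypothetical
 * names; not the kernel implementation). A scratch "added" bitmap records
 * which dirty bits THIS pin operation set, so a failure can clear exactly
 * those bits while preserving bits that were already dirty beforehand.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define NPAGES 8

/* Returns the previous value, then sets the bit: a test_and_set_bit() stand-in. */
static bool test_and_set(bool *bitmap, int bit)
{
	bool old = bitmap[bit];

	bitmap[bit] = true;
	return old;
}

/* Try to "pin" npage pages; fail_at simulates a pin failure partway through. */
static int toy_pin_pages(bool *dirty, int npage, int fail_at)
{
	bool *added = calloc(npage, sizeof(*added));
	int i, j;

	if (!added)
		return -1;

	for (i = 0; i < npage; i++) {
		if (i == fail_at)
			goto unwind;	/* simulated pin failure */
		/* Only remember bits we flipped from clear to set. */
		if (!test_and_set(dirty, i))
			added[i] = true;
	}
	free(added);
	return npage;

unwind:
	/* Clear only the bits this call added, not pre-existing dirty bits. */
	for (j = 0; j < i; j++)
		if (added[j])
			dirty[j] = false;
	free(added);
	return -1;
}

int main(void)
{
	bool dirty[NPAGES] = { [1] = true, [3] = true }; /* dirty before pin */
	int i;

	toy_pin_pages(dirty, NPAGES, 5);	/* fail after pinning 5 pages */

	/* Pages 1 and 3 stay dirty; bits set by the failed pin are cleared. */
	for (i = 0; i < NPAGES; i++)
		printf("page %d: %s\n", i, dirty[i] ? "dirty" : "clean");
	return 0;
}

The scratch bitmap costs one bit per page in the request and is freed on
both the success and failure paths; without it, the unwind path could not
tell whether a bit was set by this call or was already dirty before pinning
began.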