From patchwork Tue Apr 13 08:54:47 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12199703
From: Keqian Zhu
To: , , , Robin Murphy, "Will Deacon", Joerg Roedel, Yi Sun, Jean-Philippe
Brucker, Jonathan Cameron, Tian Kevin, Lu Baolu
CC: Alex Williamson, Cornelia Huck, Kirti Wankhede, , , ,
Subject: [PATCH v3 02/12] iommu: Add iommu_split_block interface
Date: Tue, 13 Apr 2021 16:54:47 +0800
Message-ID: <20210413085457.25400-3-zhukeqian1@huawei.com>
X-Mailer: git-send-email 2.8.4.windows.1
In-Reply-To: <20210413085457.25400-1-zhukeqian1@huawei.com>
References: <20210413085457.25400-1-zhukeqian1@huawei.com>
MIME-Version: 1.0

Block (large page) mapping is not a proper granule for dirty log tracking.
To take an extreme example: if DMA writes one byte, the dirty amount
reported under a 1G mapping is 1G, but under a 4K mapping it is just 4K.

This adds a new interface named iommu_split_block to the IOMMU base layer.
A specific IOMMU driver can invoke it when starting dirty log tracking;
if it does, the driver must also implement the split_block iommu ops
callback. We flush all iotlbs once after the whole procedure completes,
rather than per split, to ease pressure on the IOMMU, as we will
generally handle a huge range of mappings.
Signed-off-by: Keqian Zhu
Signed-off-by: Kunkun Jiang
---
 drivers/iommu/iommu.c | 41 +++++++++++++++++++++++++++++++++++++++++
 include/linux/iommu.h | 11 +++++++++++
 2 files changed, 52 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 667b2d6d2fc0..bb413a927870 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2721,6 +2721,47 @@ int iommu_domain_set_attr(struct iommu_domain *domain,
 }
 EXPORT_SYMBOL_GPL(iommu_domain_set_attr);
 
+int iommu_split_block(struct iommu_domain *domain, unsigned long iova,
+		      size_t size)
+{
+	const struct iommu_ops *ops = domain->ops;
+	unsigned int min_pagesz;
+	size_t pgsize;
+	bool flush = false;
+	int ret = 0;
+
+	if (unlikely(!ops || !ops->split_block))
+		return -ENODEV;
+
+	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
+	if (!IS_ALIGNED(iova | size, min_pagesz)) {
+		pr_err("unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n",
+		       iova, size, min_pagesz);
+		return -EINVAL;
+	}
+
+	while (size) {
+		flush = true;
+
+		pgsize = iommu_pgsize(domain, iova, size);
+
+		ret = ops->split_block(domain, iova, pgsize);
+		if (ret)
+			break;
+
+		pr_debug("split handled: iova 0x%lx size 0x%zx\n", iova, pgsize);
+
+		iova += pgsize;
+		size -= pgsize;
+	}
+
+	if (flush)
+		iommu_flush_iotlb_all(domain);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_split_block);
+
 int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
 			   unsigned long iova, size_t size, int prot)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 7f9ed9f520e2..c6c90ac069e3 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -208,6 +208,7 @@ struct iommu_iotlb_gather {
  * @device_group: find iommu group for a particular device
  * @domain_get_attr: Query domain attributes
  * @domain_set_attr: Change domain attributes
+ * @split_block: Split block mapping into page mapping
  * @switch_dirty_log: Perform actions to start|stop dirty log tracking
  * @sync_dirty_log: Sync dirty log from IOMMU into a dirty bitmap
  * @clear_dirty_log: Clear dirty log of IOMMU by a mask bitmap
@@ -267,6 +268,8 @@ struct iommu_ops {
 				    enum iommu_attr attr, void *data);
 
 	/* Track dirty log */
+	int (*split_block)(struct iommu_domain *domain, unsigned long iova,
+			   size_t size);
 	int (*switch_dirty_log)(struct iommu_domain *domain, bool enable,
 				unsigned long iova, size_t size, int prot);
 	int (*sync_dirty_log)(struct iommu_domain *domain,
@@ -529,6 +532,8 @@ extern int iommu_domain_get_attr(struct iommu_domain *domain,
 				 enum iommu_attr, void *data);
 extern int iommu_domain_set_attr(struct iommu_domain *domain,
 				 enum iommu_attr, void *data);
+extern int iommu_split_block(struct iommu_domain *domain, unsigned long iova,
+			     size_t size);
 extern int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
 				  unsigned long iova, size_t size, int prot);
 extern int iommu_sync_dirty_log(struct iommu_domain *domain, unsigned long iova,
@@ -929,6 +934,12 @@ static inline int iommu_domain_set_attr(struct iommu_domain *domain,
 	return -EINVAL;
 }
 
+static inline int iommu_split_block(struct iommu_domain *domain,
+				    unsigned long iova, size_t size)
+{
+	return -EINVAL;
+}
+
 static inline int iommu_switch_dirty_log(struct iommu_domain *domain,
 					 bool enable, unsigned long iova,
 					 size_t size, int prot)