From patchwork Mon Feb  9 11:19:21 2015
X-Patchwork-Submitter: Tomasz Figa
X-Patchwork-Id: 5800891
From: Tomasz Figa
To: iommu@lists.linux-foundation.org
Subject: [PATCH] CHROMIUM: iommu: rockchip: Make sure that page table state
	is coherent
Date: Mon, 9 Feb 2015 20:19:21 +0900
Message-Id: <1423480761-33453-1-git-send-email-tfiga@chromium.org>
Cc: Heiko Stuebner, Joerg Roedel, linux-kernel@vger.kernel.org,
	Daniel Kurtz, Tomasz Figa, linux-rockchip@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org
Even though the code uses the dt_lock spin lock to serialize mapping
operations from different threads, it does not protect against IOMMU
accesses that might already be taking place and thus altering the state
of the IOTLB. This means that the current mapping code, which first zaps
the page table and only then updates it with the new mapping, is prone
to the mentioned race.

In addition, the current code assumes that mappings are always > 4 MiB
(which translates to 1024 PTEs) and so would always occupy entire page
tables. This is not true for mappings created by the V4L2 Videobuf2 DMA
contig allocator.

This patch changes the mapping code to always zap the page table after
it is updated, which avoids the aforementioned race, and to also zap the
last page of the mapping to make sure that stale data is not cached from
an already existing mapping.

Signed-off-by: Tomasz Figa
Reviewed-by: Daniel Kurtz
Tested-by: Heiko Stuebner
---
 drivers/iommu/rockchip-iommu.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index 6a8b1ec..b06fe76 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -544,6 +544,15 @@ static void rk_iommu_zap_iova(struct rk_iommu_domain *rk_domain,
 	spin_unlock_irqrestore(&rk_domain->iommus_lock, flags);
 }
 
+static void rk_iommu_zap_iova_first_last(struct rk_iommu_domain *rk_domain,
+					 dma_addr_t iova, size_t size)
+{
+	rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE);
+	if (size > SPAGE_SIZE)
+		rk_iommu_zap_iova(rk_domain, iova + size - SPAGE_SIZE,
+				  SPAGE_SIZE);
+}
+
 static u32 *rk_dte_get_page_table(struct rk_iommu_domain *rk_domain,
 				  dma_addr_t iova)
 {
@@ -568,12 +577,6 @@ static u32 *rk_dte_get_page_table(struct rk_iommu_domain *rk_domain,
 	rk_table_flush(page_table, NUM_PT_ENTRIES);
 	rk_table_flush(dte_addr, 1);
 
-	/*
-	 * Zap the first iova of newly allocated page table so iommu evicts
-	 * old cached value of new dte from the iotlb.
-	 */
-	rk_iommu_zap_iova(rk_domain, iova, SPAGE_SIZE);
-
 done:
 	pt_phys = rk_dte_pt_address(dte);
 	return (u32 *)phys_to_virt(pt_phys);
@@ -623,6 +626,14 @@ static int rk_iommu_map_iova(struct rk_iommu_domain *rk_domain, u32 *pte_addr,
 
 	rk_table_flush(pte_addr, pte_count);
 
+	/*
+	 * Zap the first and last iova to evict from iotlb any previously
+	 * mapped cachelines holding stale values for its dte and pte.
+	 * We only zap the first and last iova, since only they could have
+	 * dte or pte shared with an existing mapping.
+	 */
+	rk_iommu_zap_iova_first_last(rk_domain, iova, size);
+
 	return 0;
 unwind:
 	/* Unmap the range of iovas that we just mapped */
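
For readers unfamiliar with the Rockchip page table layout, the following
standalone sketch (not part of the patch) illustrates the arithmetic behind
zapping only the first and last IOVA: with 4 KiB pages and 1024 PTEs per page
table, one DTE covers 4 MiB, so only the two boundary pages of a mapping can
fall into a page table that is shared with a pre-existing neighbouring
mapping. SPAGE_SIZE and NUM_PT_ENTRIES mirror the driver's constants, while
DTE_SPAN and dte_index() are names made up for this illustration only.

#include <stdio.h>
#include <stdint.h>

#define SPAGE_SIZE	4096u			/* small page size, as in the driver */
#define NUM_PT_ENTRIES	1024u			/* PTEs per page table, as in the driver */
#define DTE_SPAN	(SPAGE_SIZE * NUM_PT_ENTRIES)	/* 4 MiB covered by one DTE */

/* Index of the page table (DTE) that a given 32-bit IOVA falls into. */
static unsigned int dte_index(uint32_t iova)
{
	return iova / DTE_SPAN;
}

int main(void)
{
	/* Example mapping: 2 MiB at an IOVA that is not 4 MiB aligned. */
	uint32_t iova = 0x10300000;
	uint32_t size = 0x00200000;
	uint32_t first = iova;
	uint32_t last = iova + size - SPAGE_SIZE;

	printf("first page in page table %u, last page in page table %u\n",
	       dte_index(first), dte_index(last));

	/*
	 * Page tables strictly between these two indices are fully covered by
	 * the new mapping, so only the first and last IOVA can hit a DTE or
	 * PTE cache line shared with an existing mapping - which is why the
	 * patch zaps exactly those two IOVAs after updating the tables.
	 */
	return 0;
}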