From patchwork Tue Jul 25 17:26:54 2017
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 9862573
From: Oleksandr Tyshchenko
To: xen-devel@lists.xenproject.org
Date: Tue, 25 Jul 2017 20:26:54 +0300
Message-Id: <1501003615-15274-13-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1501003615-15274-1-git-send-email-olekstysh@gmail.com>
References: <1501003615-15274-1-git-send-email-olekstysh@gmail.com>
Cc: Oleksandr Tyshchenko, Kevin Tian, Jan Beulich
Subject: [Xen-devel] [PATCH v2 12/13] [RFC] iommu: VT-d: Squash map_pages/unmap_pages with map_page/unmap_page

From: Oleksandr Tyshchenko

Reduce the scope of the TODO by squashing the single-page implementation
into the multi-page one. The next target is to use large pages whenever
possible in the case that hardware supports them.

Signed-off-by: Oleksandr Tyshchenko
CC: Jan Beulich
CC: Kevin Tian

---
Changes in v1:
   -

Changes in v2:
   -
---
 xen/drivers/passthrough/vtd/iommu.c | 138 +++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 71 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 45d1f36..d20b2f9 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1750,15 +1750,24 @@ static void iommu_domain_teardown(struct domain *d)
     spin_unlock(&hd->arch.mapping_lock);
 }
 
-static int __must_check intel_iommu_map_page(struct domain *d,
-                                             unsigned long gfn,
-                                             unsigned long mfn,
-                                             unsigned int flags)
+static int __must_check intel_iommu_unmap_pages(struct domain *d,
+                                                unsigned long gfn,
+                                                unsigned int order);
+
+/*
+ * TODO: Optimize by using large pages whenever possible in the case
+ * that hardware supports them.
+ */
+static int __must_check intel_iommu_map_pages(struct domain *d,
+                                              unsigned long gfn,
+                                              unsigned long mfn,
+                                              unsigned int order,
+                                              unsigned int flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
-    u64 pg_maddr;
     int rc = 0;
+    unsigned long orig_gfn = gfn;
+    unsigned long i;
 
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
@@ -1768,78 +1777,60 @@ static int __must_check intel_iommu_map_page(struct domain *d,
     if ( iommu_passthrough && is_hardware_domain(d) )
         return 0;
 
-    spin_lock(&hd->arch.mapping_lock);
-
-    pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
-    if ( pg_maddr == 0 )
+    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
     {
-        spin_unlock(&hd->arch.mapping_lock);
-        return -ENOMEM;
-    }
-    page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + (gfn & LEVEL_MASK);
-    old = *pte;
-    dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
-    dma_set_pte_prot(new,
-                     ((flags & IOMMUF_readable) ? DMA_PTE_READ : 0) |
-                     ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
+        struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
+        u64 pg_maddr;
 
-    /* Set the SNP on leaf page table if Snoop Control available */
-    if ( iommu_snoop )
-        dma_set_pte_snp(new);
+        spin_lock(&hd->arch.mapping_lock);
 
-    if ( old.val == new.val )
-    {
+        pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
+        if ( pg_maddr == 0 )
+        {
+            spin_unlock(&hd->arch.mapping_lock);
+            rc = -ENOMEM;
+            goto err;
+        }
+        page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
+        pte = page + (gfn & LEVEL_MASK);
+        old = *pte;
+        dma_set_pte_addr(new, (paddr_t)mfn << PAGE_SHIFT_4K);
+        dma_set_pte_prot(new,
+                         ((flags & IOMMUF_readable) ? DMA_PTE_READ : 0) |
+                         ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
+
+        /* Set the SNP on leaf page table if Snoop Control available */
+        if ( iommu_snoop )
+            dma_set_pte_snp(new);
+
+        if ( old.val == new.val )
+        {
+            spin_unlock(&hd->arch.mapping_lock);
+            unmap_vtd_domain_page(page);
+            continue;
+        }
+        *pte = new;
+
+        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
         spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
-        return 0;
-    }
-    *pte = new;
-
-    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
-    spin_unlock(&hd->arch.mapping_lock);
-    unmap_vtd_domain_page(page);
 
-    if ( !this_cpu(iommu_dont_flush_iotlb) )
-        rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
-
-    return rc;
-}
-
-static int __must_check intel_iommu_unmap_page(struct domain *d,
-                                               unsigned long gfn)
-{
-    /* Do nothing if hardware domain and iommu supports pass thru. */
-    if ( iommu_passthrough && is_hardware_domain(d) )
-        return 0;
-
-    return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
-}
-
-/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
-static int __must_check intel_iommu_map_pages(struct domain *d,
-                                              unsigned long gfn,
-                                              unsigned long mfn,
-                                              unsigned int order,
-                                              unsigned int flags)
-{
-    unsigned long i;
-    int rc = 0;
-
-    for ( i = 0; i < (1UL << order); i++ )
-    {
-        rc = intel_iommu_map_page(d, gfn + i, mfn + i, flags);
-        if ( unlikely(rc) )
+        if ( !this_cpu(iommu_dont_flush_iotlb) )
        {
-            while ( i-- )
-                /* If statement to satisfy __must_check. */
-                if ( intel_iommu_unmap_page(d, gfn + i) )
-                    continue;
-
-            break;
+            rc = iommu_flush_iotlb(d, gfn, dma_pte_present(old), 1);
+            if ( rc )
+                goto err;
         }
     }
 
+    return 0;
+
+err:
+    while ( i-- )
+        /* If statement to satisfy __must_check. */
+        if ( intel_iommu_unmap_pages(d, orig_gfn + i, 0) )
+            continue;
+
     return rc;
 }
 
@@ -1847,12 +1838,17 @@ static int __must_check intel_iommu_unmap_pages(struct domain *d,
                                                 unsigned long gfn,
                                                 unsigned int order)
 {
-    unsigned long i;
     int rc = 0;
+    unsigned long i;
+
+    /* Do nothing if hardware domain and iommu supports pass thru. */
+    if ( iommu_passthrough && is_hardware_domain(d) )
+        return 0;
 
-    for ( i = 0; i < (1UL << order); i++ )
+    for ( i = 0; i < (1UL << order); i++, gfn++ )
    {
-        int ret = intel_iommu_unmap_page(d, gfn + i);
+        int ret = dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
+
         if ( !rc )
             rc = ret;
    }
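
As an aside for readers following the rework: the consolidated intel_iommu_map_pages()
above maps 2^order frames one at a time and, if any step fails, walks back over the
frames it has already mapped before returning the error. The standalone C sketch below
illustrates just that map-then-roll-back pattern. It is not part of the patch, and
map_one(), unmap_one() and map_pages() are hypothetical stand-ins used only so the
example compiles on its own.

/* Standalone sketch of the map/roll-back pattern (hypothetical helpers). */
#include <stdio.h>

/* Stand-in for mapping one 4K frame; fail on gfn 2 to exercise the rollback. */
static int map_one(unsigned long gfn, unsigned long mfn)
{
    return (gfn == 2) ? -1 : 0;
}

/* Stand-in for unmapping one 4K frame. */
static void unmap_one(unsigned long gfn)
{
    printf("rolled back gfn %lu\n", gfn);
}

/* Map 2^order frames starting at gfn/mfn; undo partial work on failure. */
static int map_pages(unsigned long gfn, unsigned long mfn, unsigned int order)
{
    unsigned long i;
    int rc = 0;

    for ( i = 0; i < (1UL << order); i++ )
    {
        rc = map_one(gfn + i, mfn + i);
        if ( rc )
        {
            while ( i-- )            /* unmap everything mapped so far */
                unmap_one(gfn + i);
            break;
        }
    }

    return rc;
}

int main(void)
{
    /* order 2 => 4 frames; the third one fails, so frames 0 and 1 are undone. */
    return map_pages(0, 0, 2) ? 1 : 0;
}

The post-decrement in "while ( i-- )" is what makes the rollback visit exactly the
frames 0..i-1 that were successfully mapped before the failing iteration, which is the
same pattern the err: path in the patch relies on.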