From patchwork Tue Jul 25 17:26:55 2017
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 9862571
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko, Suravee Suthikulpanit, Jan Beulich
Date: Tue, 25 Jul 2017 20:26:55 +0300
Message-Id: <1501003615-15274-14-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1501003615-15274-1-git-send-email-olekstysh@gmail.com>
References: <1501003615-15274-1-git-send-email-olekstysh@gmail.com>
Subject: [Xen-devel] [PATCH v2 13/13] [RFC] iommu: AMD-Vi: Squash map_pages/unmap_pages with map_page/unmap_page

From: Oleksandr Tyshchenko <olekstysh@gmail.com>

Reduce the scope of the TODO by squashing the single-page map/unmap
operations into the multi-page ones. The next target is to use large
pages whenever possible, where the hardware supports them.

Signed-off-by: Oleksandr Tyshchenko <olekstysh@gmail.com>
CC: Jan Beulich
CC: Suravee Suthikulpanit
---
   Changes in v1:
      -

   Changes in v2:
      -
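As an illustration of the resulting interface, a caller can now map a
physically contiguous 2^order block of guest frames with a single call,
and the map path unwinds its own partial work on failure. A minimal
sketch (map_guest_range is a hypothetical helper, not part of this
patch; only amd_iommu_map_pages() and the IOMMUF_* flags are taken
from this series):

    /*
     * Hypothetical caller, for illustration only: map 2^order
     * consecutive 4k pages in one call.  amd_iommu_map_pages()
     * rolls back its own partial mappings on failure, so no
     * page-by-page cleanup is needed here.
     */
    static int map_guest_range(struct domain *d, unsigned long gfn,
                               unsigned long mfn, unsigned int order)
    {
        return amd_iommu_map_pages(d, gfn, mfn, order,
                                   IOMMUF_readable | IOMMUF_writable);
    }

Note also the changed unmap semantics: amd_iommu_unmap_pages() keeps
going after a per-page failure and reports the first error seen,
rather than stopping at the failing page.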

 xen/drivers/passthrough/amd/iommu_map.c | 250 ++++++++++++++++----------------
 1 file changed, 121 insertions(+), 129 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index ea3a728..22d0cc6 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -631,188 +631,180 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     return 0;
 }
 
-static int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
-                                           unsigned long mfn,
-                                           unsigned int flags)
+/*
+ * TODO: Optimize by using large pages whenever possible in the case
+ * that hardware supports them.
+ */
+int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
+                                     unsigned long mfn,
+                                     unsigned int order,
+                                     unsigned int flags)
 {
-    bool_t need_flush = 0;
     struct domain_iommu *hd = dom_iommu(d);
     int rc;
-    unsigned long pt_mfn[7];
-    unsigned int merge_level;
+    unsigned long orig_gfn = gfn;
+    unsigned long i;
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
-    memset(pt_mfn, 0, sizeof(pt_mfn));
-
     spin_lock(&hd->arch.mapping_lock);
-
     rc = amd_iommu_alloc_root(hd);
+    spin_unlock(&hd->arch.mapping_lock);
     if ( rc )
     {
-        spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Root table alloc failed, gfn = %lx\n", gfn);
         domain_crash(d);
         return rc;
     }
 
-    /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for lager gfn now */
-    if ( is_hvm_domain(d) )
+    for ( i = 0; i < (1UL << order); i++, gfn++, mfn++ )
     {
-        if ( update_paging_mode(d, gfn) )
+        bool_t need_flush = 0;
+        unsigned long pt_mfn[7];
+        unsigned int merge_level;
+
+        memset(pt_mfn, 0, sizeof(pt_mfn));
+
+        spin_lock(&hd->arch.mapping_lock);
+
+        /* Since HVM domain is initialized with 2 level IO page table,
+         * we might need a deeper page table for larger gfn now */
+        if ( is_hvm_domain(d) )
+        {
+            if ( update_paging_mode(d, gfn) )
+            {
+                spin_unlock(&hd->arch.mapping_lock);
+                AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
+                domain_crash(d);
+                rc = -EFAULT;
+                goto err;
+            }
+        }
+
+        if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
         {
             spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
+            AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
             domain_crash(d);
-            return -EFAULT;
+            rc = -EFAULT;
+            goto err;
         }
-    }
 
-    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
-        domain_crash(d);
-        return -EFAULT;
-    }
+        /* Install 4k mapping first */
+        need_flush = set_iommu_pte_present(pt_mfn[1], gfn, mfn,
+                                           IOMMU_PAGING_MODE_LEVEL_1,
+                                           !!(flags & IOMMUF_writable),
+                                           !!(flags & IOMMUF_readable));
 
-    /* Install 4k mapping first */
-    need_flush = set_iommu_pte_present(pt_mfn[1], gfn, mfn,
-                                       IOMMU_PAGING_MODE_LEVEL_1,
-                                       !!(flags & IOMMUF_writable),
-                                       !!(flags & IOMMUF_readable));
+        /* Do not increase pde count if io mapping has not been changed */
+        if ( !need_flush )
+        {
+            spin_unlock(&hd->arch.mapping_lock);
+            continue;
+        }
 
-    /* Do not increase pde count if io mapping has not been changed */
-    if ( !need_flush )
-        goto out;
+        /* 4K mapping for PV guests never changes,
+         * no need to flush if we trust non-present bits */
+        if ( is_hvm_domain(d) )
+            amd_iommu_flush_pages(d, gfn, 0);
 
-    /* 4K mapping for PV guests never changes,
-     * no need to flush if we trust non-present bits */
-    if ( is_hvm_domain(d) )
-        amd_iommu_flush_pages(d, gfn, 0);
-
-    for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
-          merge_level <= hd->arch.paging_mode; merge_level++ )
-    {
-        if ( pt_mfn[merge_level] == 0 )
-            break;
-        if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
-                                     gfn, mfn, merge_level) )
-            break;
-
-        if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn,
-                               flags, merge_level) )
+        for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
+              merge_level <= hd->arch.paging_mode; merge_level++ )
         {
-            spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
-                            "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
-            domain_crash(d);
-            return -EFAULT;
+            if ( pt_mfn[merge_level] == 0 )
+                break;
+            if ( !iommu_update_pde_count(d, pt_mfn[merge_level],
+                                         gfn, mfn, merge_level) )
+                break;
+
+            if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn,
+                                   flags, merge_level) )
+            {
+                spin_unlock(&hd->arch.mapping_lock);
+                AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
+                                "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
+                domain_crash(d);
+                rc = -EFAULT;
+                goto err;
+            }
+
+            /* Deallocate lower level page table */
+            free_amd_iommu_pgtable(mfn_to_page(pt_mfn[merge_level - 1]));
        }
 
-        /* Deallocate lower level page table */
-        free_amd_iommu_pgtable(mfn_to_page(pt_mfn[merge_level - 1]));
+        spin_unlock(&hd->arch.mapping_lock);
     }
 
-out:
-    spin_unlock(&hd->arch.mapping_lock);
     return 0;
+
+err:
+    while ( i-- )
+        /* If statement to satisfy __must_check. */
+        if ( amd_iommu_unmap_pages(d, orig_gfn + i, 0) )
+            continue;
+
+    return rc;
 }
 
-static int __must_check amd_iommu_unmap_page(struct domain *d,
-                                             unsigned long gfn)
+int __must_check amd_iommu_unmap_pages(struct domain *d,
+                                       unsigned long gfn,
+                                       unsigned int order)
 {
-    unsigned long pt_mfn[7];
     struct domain_iommu *hd = dom_iommu(d);
+    int rt = 0;
+    unsigned long i;
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
-    memset(pt_mfn, 0, sizeof(pt_mfn));
-
-    spin_lock(&hd->arch.mapping_lock);
-
     if ( !hd->arch.root_table )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
         return 0;
-    }
 
-    /* Since HVM domain is initialized with 2 level IO page table,
-     * we might need a deeper page table for lager gfn now */
-    if ( is_hvm_domain(d) )
+    for ( i = 0; i < (1UL << order); i++, gfn++ )
     {
-        int rc = update_paging_mode(d, gfn);
+        unsigned long pt_mfn[7];
 
-        if ( rc )
-        {
-            spin_unlock(&hd->arch.mapping_lock);
-            AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
-            if ( rc != -EADDRNOTAVAIL )
-                domain_crash(d);
-            return rc;
-        }
-    }
+        memset(pt_mfn, 0, sizeof(pt_mfn));
 
-    if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
-        AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
-        domain_crash(d);
-        return -EFAULT;
-    }
-
-    /* mark PTE as 'page not present' */
-    clear_iommu_pte_present(pt_mfn[1], gfn);
-    spin_unlock(&hd->arch.mapping_lock);
+        spin_lock(&hd->arch.mapping_lock);
 
-    amd_iommu_flush_pages(d, gfn, 0);
-
-    return 0;
-}
-
-/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
-int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
-                                     unsigned long mfn, unsigned int order,
-                                     unsigned int flags)
-{
-    unsigned long i;
-    int rc = 0;
-
-    for ( i = 0; i < (1UL << order); i++ )
-    {
-        rc = amd_iommu_map_page(d, gfn + i, mfn + i, flags);
-        if ( unlikely(rc) )
+        /* Since HVM domain is initialized with 2 level IO page table,
+         * we might need a deeper page table for larger gfn now */
+        if ( is_hvm_domain(d) )
         {
-            while ( i-- )
-                /* If statement to satisfy __must_check. */
-                if ( amd_iommu_unmap_page(d, gfn + i) )
-                    continue;
+            int rc = update_paging_mode(d, gfn);
 
-            break;
+            if ( rc )
+            {
+                spin_unlock(&hd->arch.mapping_lock);
+                AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
+                if ( rc != -EADDRNOTAVAIL )
+                    domain_crash(d);
+                if ( !rt )
+                    rt = rc;
+                continue;
+            }
         }
 
-    }
-
-    return rc;
-}
 
-int __must_check amd_iommu_unmap_pages(struct domain *d, unsigned long gfn,
-                                       unsigned int order)
-{
-    unsigned long i;
-    int rc = 0;
+        if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
+        {
+            spin_unlock(&hd->arch.mapping_lock);
+            AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
+            domain_crash(d);
+            if ( !rt )
+                rt = -EFAULT;
+            continue;
+        }
 
-    for ( i = 0; i < (1UL << order); i++ )
-    {
-        int ret = amd_iommu_unmap_page(d, gfn + i);
+        /* mark PTE as 'page not present' */
+        clear_iommu_pte_present(pt_mfn[1], gfn);
+        spin_unlock(&hd->arch.mapping_lock);
 
-        if ( !rc )
-            rc = ret;
+        amd_iommu_flush_pages(d, gfn, 0);
     }
 
-    return rc;
+    return rt;
 }
 
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
@@ -831,7 +823,7 @@ int amd_iommu_reserve_domain_unity_map(struct domain *domain,
     gfn = phys_addr >> PAGE_SHIFT;
     for ( i = 0; i < npages; i++ )
     {
-        rt = amd_iommu_map_page(domain, gfn +i, gfn +i, flags);
+        rt = amd_iommu_map_pages(domain, gfn + i, gfn + i, 0, flags);
         if ( rt != 0 )
             return rt;
     }