From patchwork Tue Jul 25 17:26:44 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Oleksandr Tyshchenko
X-Patchwork-Id: 9862579
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 25 Jul 2017 20:26:44 +0300
Message-Id: <1501003615-15274-3-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1501003615-15274-1-git-send-email-olekstysh@gmail.com>
References: <1501003615-15274-1-git-send-email-olekstysh@gmail.com>
Cc: Kevin Tian, Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
 Ian Jackson, Tim Deegan, Oleksandr Tyshchenko, Julien Grall,
 Suravee Suthikulpanit
Subject: [Xen-devel] [PATCH v2 02/13] iommu: Add extra order argument to the IOMMU APIs and platform callbacks

From: Oleksandr Tyshchenko <olekstysh@gmail.com>

Replace the existing single-page IOMMU APIs and platform callbacks with
multi-page ones, and modify all related parts accordingly.

The new map_pages/unmap_pages APIs do almost the same thing as the old
map_page/unmap_page ones, except that they take an extra order argument
and as a result can handle a number of pages at once. The same applies
to the new platform callbacks.

Although the current behaviour was retained in all places (I hope), it
should be noted that the rollback logic was moved from the common code
into the IOMMU drivers. The IOMMU drivers are now responsible for
unmapping already mapped pages if something goes wrong while mapping a
number of pages (order > 0).

Signed-off-by: Oleksandr Tyshchenko <olekstysh@gmail.com>
Reviewed-by/CC: Jan Beulich (only for x86 and generic parts)
CC: Julien Grall
CC: Kevin Tian
CC: Suravee Suthikulpanit
CC: Andrew Cooper
CC: George Dunlap
CC: Ian Jackson
CC: Konrad Rzeszutek Wilk
CC: Stefano Stabellini
CC: Tim Deegan
CC: Wei Liu
---
Changes in v1:
   - Replace existing single-page IOMMU APIs/platform callbacks with
     multi-page ones instead of just keeping both variants of them.
   - Use order argument instead of page_count.
   - Clarify patch subject/description.

Changes in v2:
   - Add maintainers in CC
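For illustration (not part of the patch): with the new API a caller no
longer open-codes a per-page loop with manual rollback; it hands the
whole range to the IOMMU layer in one call, and on failure the driver
has already unmapped whatever subset of the range it managed to map.
A minimal sketch of the calling convention, assuming a hypothetical
order-9 (2MB) range starting at gfn/mfn:

    /* Map 2^9 contiguous pages in one call. */
    int rc = iommu_map_pages(d, gfn, mfn, 9,
                             IOMMUF_readable | IOMMUF_writable);

    if ( rc )
        /* The driver has already rolled back any partial mapping,
         * so no cleanup is needed here. */
        return rc;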
---
 xen/arch/x86/mm.c                             | 11 +++---
 xen/arch/x86/mm/p2m-ept.c                     | 21 ++---------
 xen/arch/x86/mm/p2m-pt.c                      | 26 +++-----------
 xen/arch/x86/mm/p2m.c                         | 38 ++++----------------
 xen/arch/x86/x86_64/mm.c                      |  5 +--
 xen/common/grant_table.c                      | 10 +++---
 xen/drivers/passthrough/amd/iommu_map.c       | 50 +++++++++++++++++++++++++--
 xen/drivers/passthrough/amd/pci_amd_iommu.c   |  8 ++---
 xen/drivers/passthrough/arm/smmu.c            | 41 ++++++++++++++++++++--
 xen/drivers/passthrough/iommu.c               | 21 +++++------
 xen/drivers/passthrough/vtd/iommu.c           | 48 +++++++++++++++++++++++--
 xen/drivers/passthrough/vtd/x86/vtd.c         |  4 +--
 xen/drivers/passthrough/x86/iommu.c           |  6 ++--
 xen/include/asm-x86/hvm/svm/amd-iommu-proto.h |  8 +++--
 xen/include/xen/iommu.h                       | 20 ++++++-----
 15 files changed, 196 insertions(+), 121 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2dc7db9..33fcffe 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2623,11 +2623,14 @@ static int __get_page_type(struct page_info *page, unsigned long type,
         if ( d && is_pv_domain(d) && unlikely(need_iommu(d)) )
         {
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                iommu_ret = iommu_unmap_page(d, mfn_to_gmfn(d, page_to_mfn(page)));
+                iommu_ret = iommu_unmap_pages(d,
+                                              mfn_to_gmfn(d, page_to_mfn(page)),
+                                              0);
             else if ( type == PGT_writable_page )
-                iommu_ret = iommu_map_page(d, mfn_to_gmfn(d, page_to_mfn(page)),
-                                           page_to_mfn(page),
-                                           IOMMUF_readable|IOMMUF_writable);
+                iommu_ret = iommu_map_pages(d,
+                                            mfn_to_gmfn(d, page_to_mfn(page)),
+                                            page_to_mfn(page), 0,
+                                            IOMMUF_readable|IOMMUF_writable);
         }
     }

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index ecab56f..0ccf451 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -870,26 +870,9 @@ out:
         else
         {
             if ( iommu_flags )
-                for ( i = 0; i < (1 << order); i++ )
-                {
-                    rc = iommu_map_page(d, gfn + i, mfn_x(mfn) + i, iommu_flags);
-                    if ( unlikely(rc) )
-                    {
-                        while ( i-- )
-                            /* If statement to satisfy __must_check. */
-                            if ( iommu_unmap_page(p2m->domain, gfn + i) )
-                                continue;
-
-                        break;
-                    }
-                }
+                rc = iommu_map_pages(d, gfn, mfn_x(mfn), order, iommu_flags);
             else
-                for ( i = 0; i < (1 << order); i++ )
-                {
-                    ret = iommu_unmap_page(d, gfn + i);
-                    if ( !rc )
-                        rc = ret;
-                }
+                rc = iommu_unmap_pages(d, gfn, order);
         }
     }

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 06e64b8..b512ee3 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -514,7 +514,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
 {
     /* XXX -- this might be able to be faster iff current->domain == d */
     void *table;
-    unsigned long i, gfn_remainder = gfn;
+    unsigned long gfn_remainder = gfn;
     l1_pgentry_t *p2m_entry, entry_content;
     /* Intermediate table to free if we're replacing it with a superpage. */
     l1_pgentry_t intermediate_entry = l1e_empty();
@@ -722,28 +722,10 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
             amd_iommu_flush_pages(p2m->domain, gfn, page_order);
         }
         else if ( iommu_pte_flags )
-            for ( i = 0; i < (1UL << page_order); i++ )
-            {
-                rc = iommu_map_page(p2m->domain, gfn + i, mfn_x(mfn) + i,
-                                    iommu_pte_flags);
-                if ( unlikely(rc) )
-                {
-                    while ( i-- )
-                        /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(p2m->domain, gfn + i) )
-                            continue;
-
-                    break;
-                }
-            }
+            rc = iommu_map_pages(p2m->domain, gfn, mfn_x(mfn), page_order,
+                                 iommu_pte_flags);
         else
-            for ( i = 0; i < (1UL << page_order); i++ )
-            {
-                int ret = iommu_unmap_page(p2m->domain, gfn + i);
-
-                if ( !rc )
-                    rc = ret;
-            }
+            rc = iommu_unmap_pages(p2m->domain, gfn, page_order);
     }
 
 /*

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ece32ff..18a71f8 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -708,20 +708,9 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
-        int rc = 0;
-
         if ( need_iommu(p2m->domain) )
-        {
-            for ( i = 0; i < (1 << page_order); i++ )
-            {
-                int ret = iommu_unmap_page(p2m->domain, mfn + i);
-
-                if ( !rc )
-                    rc = ret;
-            }
-        }
-
-        return rc;
+            return iommu_unmap_pages(p2m->domain, mfn, page_order);
+
+        return 0;
     }
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
@@ -768,23 +757,8 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
     if ( !paging_mode_translate(d) )
     {
         if ( need_iommu(d) && t == p2m_ram_rw )
-        {
-            for ( i = 0; i < (1 << page_order); i++ )
-            {
-                rc = iommu_map_page(d, mfn_x(mfn_add(mfn, i)),
-                                    mfn_x(mfn_add(mfn, i)),
-                                    IOMMUF_readable|IOMMUF_writable);
-                if ( rc != 0 )
-                {
-                    while ( i-- > 0 )
-                        /* If statement to satisfy __must_check. */
-                        if ( iommu_unmap_page(d, mfn_x(mfn_add(mfn, i))) )
-                            continue;
-
-                    return rc;
-                }
-            }
-        }
+            return iommu_map_pages(d, mfn_x(mfn), mfn_x(mfn), page_order,
+                                   IOMMUF_readable|IOMMUF_writable);
+
         return 0;
     }
@@ -1148,7 +1122,7 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_map_page(d, gfn, gfn, IOMMUF_readable|IOMMUF_writable);
+        return iommu_map_pages(d, gfn, gfn, 0, IOMMUF_readable|IOMMUF_writable);
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1236,7 +1210,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
     {
         if ( !need_iommu(d) )
             return 0;
-        return iommu_unmap_page(d, gfn);
+        return iommu_unmap_pages(d, gfn, 0);
     }
 
     gfn_lock(p2m, gfn, 0);

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index aa1b94f..5fd1d4c 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1442,13 +1442,14 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
     if ( iommu_enabled && !iommu_passthrough && !need_iommu(hardware_domain) )
     {
         for ( i = spfn; i < epfn; i++ )
-            if ( iommu_map_page(hardware_domain, i, i, IOMMUF_readable|IOMMUF_writable) )
+            if ( iommu_map_pages(hardware_domain, i, i, 0,
+                                 IOMMUF_readable|IOMMUF_writable) )
                 break;
         if ( i != epfn )
         {
             while (i-- > old_max)
                 /* If statement to satisfy __must_check. */
-                if ( iommu_unmap_page(hardware_domain, i) )
+                if ( iommu_unmap_pages(hardware_domain, i, 0) )
                     continue;
 
             goto destroy_m2p;

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 03de2be..5399c36 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -987,13 +987,13 @@ __gnttab_map_grant_ref(
          !(old_pin & (GNTPIN_hstw_mask|GNTPIN_devw_mask)) )
     {
         if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_map_page(ld, frame, frame,
-                                 IOMMUF_readable|IOMMUF_writable);
+            err = iommu_map_pages(ld, frame, frame, 0,
+                                  IOMMUF_readable|IOMMUF_writable);
     }
     else if ( act_pin && !old_pin )
     {
         if ( !kind )
-            err = iommu_map_page(ld, frame, frame, IOMMUF_readable);
+            err = iommu_map_pages(ld, frame, frame, 0, IOMMUF_readable);
     }
     if ( err )
     {
@@ -1248,9 +1248,9 @@ __gnttab_unmap_common(
         kind = mapkind(lgt, rd, op->frame);
         if ( !kind )
-            err = iommu_unmap_page(ld, op->frame);
+            err = iommu_unmap_pages(ld, op->frame, 0);
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_map_page(ld, op->frame, op->frame, IOMMUF_readable);
+            err = iommu_map_pages(ld, op->frame, op->frame, 0, IOMMUF_readable);
 
         double_gt_unlock(lgt, rgt);

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index fd2327d..ea3a728 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -631,8 +631,9 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     return 0;
 }
 
-int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
-                       unsigned int flags)
+static int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
+                                           unsigned long mfn,
+                                           unsigned int flags)
 {
     bool_t need_flush = 0;
     struct domain_iommu *hd = dom_iommu(d);
@@ -720,7 +721,8 @@ out:
     return 0;
 }
 
-int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
+static int __must_check amd_iommu_unmap_page(struct domain *d,
+                                             unsigned long gfn)
 {
     unsigned long pt_mfn[7];
     struct domain_iommu *hd = dom_iommu(d);
@@ -771,6 +773,48 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     return 0;
 }
 
+/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
+int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
+                                     unsigned long mfn, unsigned int order,
+                                     unsigned int flags)
+{
+    unsigned long i;
+    int rc = 0;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        rc = amd_iommu_map_page(d, gfn + i, mfn + i, flags);
+        if ( unlikely(rc) )
+        {
+            while ( i-- )
+                /* If statement to satisfy __must_check. */
+                if ( amd_iommu_unmap_page(d, gfn + i) )
+                    continue;
+
+            break;
+        }
+    }
+
+    return rc;
+}
+
+int __must_check amd_iommu_unmap_pages(struct domain *d, unsigned long gfn,
+                                       unsigned int order)
+{
+    unsigned long i;
+    int rc = 0;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        int ret = amd_iommu_unmap_page(d, gfn + i);
+
+        if ( !rc )
+            rc = ret;
+    }
+
+    return rc;
+}
+
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        u64 phys_addr,
                                        unsigned long size, int iw, int ir)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 8c25110..fe744d2 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -296,8 +296,8 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
          */
         if ( mfn_valid(_mfn(pfn)) )
         {
-            int ret = amd_iommu_map_page(d, pfn, pfn,
-                                         IOMMUF_readable|IOMMUF_writable);
+            int ret = amd_iommu_map_pages(d, pfn, pfn, 0,
+                                          IOMMUF_readable|IOMMUF_writable);
 
             if ( !rc )
                 rc = ret;
@@ -620,8 +620,8 @@ const struct iommu_ops amd_iommu_ops = {
     .remove_device = amd_iommu_remove_device,
     .assign_device = amd_iommu_assign_device,
     .teardown = amd_iommu_domain_destroy,
-    .map_page = amd_iommu_map_page,
-    .unmap_page = amd_iommu_unmap_page,
+    .map_pages = amd_iommu_map_pages,
+    .unmap_pages = amd_iommu_unmap_pages,
     .free_page_table = deallocate_page_table,
     .reassign_device = reassign_device,
     .get_device_group_id = amd_iommu_group_id,

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 74c09b0..7c313c0 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2778,6 +2778,43 @@ static int __must_check arm_smmu_unmap_page(struct domain *d, unsigned long gfn)
 	return guest_physmap_remove_page(d, _gfn(gfn), _mfn(gfn), 0);
 }
 
+/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
+static int __must_check arm_smmu_map_pages(struct domain *d, unsigned long gfn,
+		unsigned long mfn, unsigned int order, unsigned int flags)
+{
+	unsigned long i;
+	int rc = 0;
+
+	for (i = 0; i < (1UL << order); i++) {
+		rc = arm_smmu_map_page(d, gfn + i, mfn + i, flags);
+		if (unlikely(rc)) {
+			while (i--)
+				/* If statement to satisfy __must_check. */
+				if (arm_smmu_unmap_page(d, gfn + i))
+					continue;
+
+			break;
+		}
+	}
+
+	return rc;
+}
+
+static int __must_check arm_smmu_unmap_pages(struct domain *d,
+		unsigned long gfn, unsigned int order)
+{
+	unsigned long i;
+	int rc = 0;
+
+	for (i = 0; i < (1UL << order); i++) {
+		int ret = arm_smmu_unmap_page(d, gfn + i);
+		if (!rc)
+			rc = ret;
+	}
+
+	return rc;
+}
+
 static const struct iommu_ops arm_smmu_iommu_ops = {
     .init = arm_smmu_iommu_domain_init,
     .hwdom_init = arm_smmu_iommu_hwdom_init,
@@ -2786,8 +2823,8 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
     .iotlb_flush_all = arm_smmu_iotlb_flush_all,
     .assign_device = arm_smmu_assign_dev,
     .reassign_device = arm_smmu_reassign_dev,
-    .map_page = arm_smmu_map_page,
-    .unmap_page = arm_smmu_unmap_page,
+    .map_pages = arm_smmu_map_pages,
+    .unmap_pages = arm_smmu_unmap_pages,
 };
 
 static __init const struct arm_smmu_device *find_smmu(const struct device *dev)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 5e81813..3e9e4c3 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -188,7 +188,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
 
-            ret = hd->platform_ops->map_page(d, gfn, mfn, mapping);
+            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
             if ( !rc )
                 rc = ret;
 
@@ -249,8 +249,8 @@ void iommu_domain_destroy(struct domain *d)
     arch_iommu_domain_destroy(d);
 }
 
-int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
-                   unsigned int flags)
+int iommu_map_pages(struct domain *d, unsigned long gfn, unsigned long mfn,
+                    unsigned int order, unsigned int flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -258,13 +258,13 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->map_page(d, gfn, mfn, flags);
+    rc = hd->platform_ops->map_pages(d, gfn, mfn, order, flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU mapping gfn %#lx to mfn %#lx failed: %d\n",
-                   d->domain_id, gfn, mfn, rc);
+                   "d%d: IOMMU mapping gfn %#lx to mfn %#lx order %u failed: %d\n",
+                   d->domain_id, gfn, mfn, order, rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
@@ -273,7 +273,8 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     return rc;
 }
 
-int iommu_unmap_page(struct domain *d, unsigned long gfn)
+int iommu_unmap_pages(struct domain *d, unsigned long gfn,
+                      unsigned int order)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -281,13 +282,13 @@ int iommu_unmap_page(struct domain *d, unsigned long gfn)
     if ( !iommu_enabled || !hd->platform_ops )
         return 0;
 
-    rc = hd->platform_ops->unmap_page(d, gfn);
+    rc = hd->platform_ops->unmap_pages(d, gfn, order);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU unmapping gfn %#lx failed: %d\n",
-                   d->domain_id, gfn, rc);
+                   "d%d: IOMMU unmapping gfn %#lx order %u failed: %d\n",
+                   d->domain_id, gfn, order, rc);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 19328f6..b4e8c89 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1816,6 +1816,50 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
     return dma_pte_clear_one(d, (paddr_t)gfn << PAGE_SHIFT_4K);
 }
 
+/* TODO: Optimize by squashing map_pages/unmap_pages with map_page/unmap_page */
+static int __must_check intel_iommu_map_pages(struct domain *d,
+                                              unsigned long gfn,
+                                              unsigned long mfn,
+                                              unsigned int order,
+                                              unsigned int flags)
+{
+    unsigned long i;
+    int rc = 0;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        rc = intel_iommu_map_page(d, gfn + i, mfn + i, flags);
+        if ( unlikely(rc) )
+        {
+            while ( i-- )
+                /* If statement to satisfy __must_check. */
+                if ( intel_iommu_unmap_page(d, gfn + i) )
+                    continue;
+
+            break;
+        }
+    }
+
+    return rc;
+}
+
+static int __must_check intel_iommu_unmap_pages(struct domain *d,
+                                                unsigned long gfn,
+                                                unsigned int order)
+{
+    unsigned long i;
+    int rc = 0;
+
+    for ( i = 0; i < (1UL << order); i++ )
+    {
+        int ret = intel_iommu_unmap_page(d, gfn + i);
+
+        if ( !rc )
+            rc = ret;
+    }
+
+    return rc;
+}
+
 int iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
                     int order, int present)
 {
@@ -2639,8 +2683,8 @@ const struct iommu_ops intel_iommu_ops = {
     .remove_device = intel_iommu_remove_device,
     .assign_device = intel_iommu_assign_device,
     .teardown = iommu_domain_teardown,
-    .map_page = intel_iommu_map_page,
-    .unmap_page = intel_iommu_unmap_page,
+    .map_pages = intel_iommu_map_pages,
+    .unmap_pages = intel_iommu_unmap_pages,
     .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,

diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index 88a60b3..62a6ee6 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -143,8 +143,8 @@ void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
         tmp = 1 << (PAGE_SHIFT - PAGE_SHIFT_4K);
         for ( j = 0; j < tmp; j++ )
         {
-            int ret = iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
-                                     IOMMUF_readable|IOMMUF_writable);
+            int ret = iommu_map_pages(d, pfn * tmp + j, pfn * tmp + j, 0,
+                                      IOMMUF_readable|IOMMUF_writable);
 
             if ( !rc )
                 rc = ret;

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 0253823..973b72f 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -65,9 +65,9 @@ int arch_iommu_populate_page_table(struct domain *d)
         {
             ASSERT(!(gfn >> DEFAULT_DOMAIN_ADDRESS_WIDTH));
             BUG_ON(SHARED_M2P(gfn));
-            rc = hd->platform_ops->map_page(d, gfn, mfn,
-                                            IOMMUF_readable |
-                                            IOMMUF_writable);
+            rc = hd->platform_ops->map_pages(d, gfn, mfn, 0,
+                                             IOMMUF_readable |
+                                             IOMMUF_writable);
         }
         if ( rc )
         {

diff --git a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
index 99bc21c..8f44489 100644
--- a/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
+++ b/xen/include/asm-x86/hvm/svm/amd-iommu-proto.h
@@ -52,9 +52,11 @@ int amd_iommu_init(void);
 int amd_iommu_update_ivrs_mapping_acpi(void);
 
 /* mapping functions */
-int __must_check amd_iommu_map_page(struct domain *d, unsigned long gfn,
-                                    unsigned long mfn, unsigned int flags);
-int __must_check amd_iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check amd_iommu_map_pages(struct domain *d, unsigned long gfn,
+                                     unsigned long mfn, unsigned int order,
+                                     unsigned int flags);
+int __must_check amd_iommu_unmap_pages(struct domain *d, unsigned long gfn,
+                                       unsigned int order);
 u64 amd_iommu_get_next_table_from_pte(u32 *entry);
 int __must_check amd_iommu_alloc_root(struct domain_iommu *hd);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,

diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 5803e3f..3297998 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -71,14 +71,16 @@ int iommu_construct(struct domain *d);
 /* Function used internally, use iommu_domain_destroy */
 void iommu_teardown(struct domain *d);
 
-/* iommu_map_page() takes flags to direct the mapping operation. */
+/* iommu_map_pages() takes flags to direct the mapping operation. */
 #define _IOMMUF_readable 0
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
 #define _IOMMUF_writable 1
 #define IOMMUF_writable  (1u<<_IOMMUF_writable)
-int __must_check iommu_map_page(struct domain *d, unsigned long gfn,
-                                unsigned long mfn, unsigned int flags);
-int __must_check iommu_unmap_page(struct domain *d, unsigned long gfn);
+int __must_check iommu_map_pages(struct domain *d, unsigned long gfn,
+                                 unsigned long mfn, unsigned int order,
+                                 unsigned int flags);
+int __must_check iommu_unmap_pages(struct domain *d, unsigned long gfn,
+                                   unsigned int order);
 
 enum iommu_feature
 {
@@ -168,9 +170,11 @@ struct iommu_ops {
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
-    int __must_check (*map_page)(struct domain *d, unsigned long gfn,
-                                 unsigned long mfn, unsigned int flags);
-    int __must_check (*unmap_page)(struct domain *d, unsigned long gfn);
+    int __must_check (*map_pages)(struct domain *d, unsigned long gfn,
+                                  unsigned long mfn, unsigned int order,
+                                  unsigned int flags);
+    int __must_check (*unmap_pages)(struct domain *d, unsigned long gfn,
+                                    unsigned int order);
     void (*free_page_table)(struct page_info *);
 #ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
@@ -213,7 +217,7 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
  * avoid unecessary iotlb_flush in the low level IOMMU code.
  *
- * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
+ * iommu_map_pages/iommu_unmap_pages must flush the iotlb but somethimes
  * this operation can be really expensive. This flag will be set by the
  * caller to notify the low level IOMMU code to avoid the iotlb flushes.
  * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by