From patchwork Wed Nov 6 15:19:16 2019
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu, Konrad Wilk,
    George Dunlap, Andrew Cooper, Sander Eikelenboom, Ian Jackson,
    Roger Pau Monné
Subject: [Xen-devel] [PATCH 2/3] introduce GFN notification for translated domains
Date: Wed, 6 Nov 2019 16:19:16 +0100
Message-ID: <7045df66-009d-6c9f-8e8d-cfd058c29131@suse.com>

In order for individual IOMMU drivers (and from an abstract pov also
architectures) to be able to adjust their data structures ahead of time
when they might cover only a sub-range of all possible GFNs, introduce
a notification call used by various code paths potentially installing a
fresh mapping of a never used GFN (for a particular domain).

Note that in gnttab_transfer() the notification and lock re-acquire
handling is best effort only (the guest may not be able to make use of
the new page in case of failure, but that's in line with the lack of a
return value check of guest_physmap_add_page() itself).

Signed-off-by: Jan Beulich
---
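Not part of the patch proper, just for illustration: the calling convention
the hunks below establish, condensed into a stand-alone sketch.
example_populate() is a made-up wrapper; notify_gfn(), _gfn(), gfn_x() and
guest_physmap_add_page() are the interfaces actually touched below. The idea
is to notify the highest GFN of the range about to be populated first, and to
install the mapping only if that succeeded (the dom0 builder and
populate_physmap() follow this shape exactly; gnttab_transfer(), as noted
above, is best effort only).

/* Illustrative only: the caller-side pattern introduced by this patch. */
static int example_populate(struct domain *d, gfn_t gfn, mfn_t mfn,
                            unsigned int order)
{
    /* Let the IOMMU driver extend its structures up to the last GFN first. */
    int rc = notify_gfn(d, _gfn(gfn_x(gfn) + (1UL << order) - 1));

    if ( rc )
        return rc;

    /* Only then install the fresh mapping of the (possibly never used) GFN. */
    return guest_physmap_add_page(d, gfn, mfn, order);
}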
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -173,7 +173,8 @@ static int __init pvh_populate_memory_ra
             continue;
         }
 
-        rc = guest_physmap_add_page(d, _gfn(start), page_to_mfn(page),
+        rc = notify_gfn(d, _gfn(start + (1UL << order) - 1)) ?:
+             guest_physmap_add_page(d, _gfn(start), page_to_mfn(page),
                                     order);
         if ( rc != 0 )
         {
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4286,9 +4286,17 @@ static int hvmop_set_param(
         if ( a.value > SHUTDOWN_MAX )
             rc = -EINVAL;
         break;
+
     case HVM_PARAM_IOREQ_SERVER_PFN:
-        d->arch.hvm.ioreq_gfn.base = a.value;
+        if ( d->arch.hvm.params[HVM_PARAM_NR_IOREQ_SERVER_PAGES] )
+            rc = notify_gfn(
+                     d,
+                     _gfn(a.value + d->arch.hvm.params
+                                    [HVM_PARAM_NR_IOREQ_SERVER_PAGES] - 1));
+        if ( !rc )
+            d->arch.hvm.ioreq_gfn.base = a.value;
         break;
+
     case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
     {
         unsigned int i;
@@ -4299,6 +4307,9 @@ static int hvmop_set_param(
             rc = -EINVAL;
             break;
         }
+        rc = notify_gfn(d, _gfn(d->arch.hvm.ioreq_gfn.base + a.value - 1));
+        if ( rc )
+            break;
         for ( i = 0; i < a.value; i++ )
             set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
 
@@ -4312,7 +4323,11 @@ static int hvmop_set_param(
         BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN >
                      sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
         if ( a.value )
-            set_bit(a.index, &d->arch.hvm.ioreq_gfn.legacy_mask);
+        {
+            rc = notify_gfn(d, _gfn(a.value));
+            if ( !rc )
+                set_bit(a.index, &d->arch.hvm.ioreq_gfn.legacy_mask);
+        }
         break;
 
     case HVM_PARAM_X87_FIP_WIDTH:
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -946,6 +946,16 @@ map_grant_ref(
         return;
     }
 
+    if ( paging_mode_translate(ld) /* && (op->flags & GNTMAP_host_map) */ &&
+         (rc = notify_gfn(ld, gaddr_to_gfn(op->host_addr))) )
+    {
+        gdprintk(XENLOG_INFO, "notify(%"PRI_gfn") -> %d\n",
+                 gfn_x(gaddr_to_gfn(op->host_addr)), rc);
+        op->status = GNTST_general_error;
+        return;
+        BUILD_BUG_ON(GNTST_okay);
+    }
+
     if ( unlikely((rd = rcu_lock_domain_by_id(op->dom)) == NULL) )
     {
         gdprintk(XENLOG_INFO, "Could not find domain %d\n", op->dom);
@@ -2123,6 +2133,7 @@ gnttab_transfer(
     {
         bool_t okay;
         int rc;
+        gfn_t gfn;
 
         if ( i && hypercall_preempt_check() )
             return i;
@@ -2300,21 +2311,52 @@ gnttab_transfer(
         act = active_entry_acquire(e->grant_table, gop.ref);
 
         if ( evaluate_nospec(e->grant_table->gt_version == 1) )
+            gfn = _gfn(shared_entry_v1(e->grant_table, gop.ref).frame);
+        else
+            gfn = _gfn(shared_entry_v2(e->grant_table, gop.ref).full_page.frame);
+
+        if ( paging_mode_translate(e) )
         {
-            grant_entry_v1_t *sha = &shared_entry_v1(e->grant_table, gop.ref);
+            gfn_t gfn2;
+
+            active_entry_release(act);
+            grant_read_unlock(e->grant_table);
+
+            rc = notify_gfn(e, gfn);
+            if ( rc )
+                printk(XENLOG_G_WARNING
+                       "%pd: gref %u: xfer GFN %"PRI_gfn" may be inaccessible (%d)\n",
+                       e, gop.ref, gfn_x(gfn), rc);
+
+            grant_read_lock(e->grant_table);
+            act = active_entry_acquire(e->grant_table, gop.ref);
 
-            guest_physmap_add_page(e, _gfn(sha->frame), mfn, 0);
-            if ( !paging_mode_translate(e) )
-                sha->frame = mfn_x(mfn);
+            if ( evaluate_nospec(e->grant_table->gt_version == 1) )
+                gfn2 = _gfn(shared_entry_v1(e->grant_table, gop.ref).frame);
+            else
+                gfn2 = _gfn(shared_entry_v2(e->grant_table, gop.ref).
+                            full_page.frame);
+
+            if ( !gfn_eq(gfn, gfn2) )
+            {
+                printk(XENLOG_G_WARNING
+                       "%pd: gref %u: xfer GFN went %"PRI_gfn" -> %"PRI_gfn"\n",
+                       e, gop.ref, gfn_x(gfn), gfn_x(gfn2));
+                gfn = gfn2;
+            }
         }
-        else
-        {
-            grant_entry_v2_t *sha = &shared_entry_v2(e->grant_table, gop.ref);
 
-            guest_physmap_add_page(e, _gfn(sha->full_page.frame), mfn, 0);
-            if ( !paging_mode_translate(e) )
-                sha->full_page.frame = mfn_x(mfn);
+        guest_physmap_add_page(e, gfn, mfn, 0);
+
+        if ( !paging_mode_translate(e) )
+        {
+            if ( evaluate_nospec(e->grant_table->gt_version == 1) )
+                shared_entry_v1(e->grant_table, gop.ref).frame = mfn_x(mfn);
+            else
+                shared_entry_v2(e->grant_table, gop.ref).full_page.frame =
+                    mfn_x(mfn);
         }
+
         smp_wmb();
         shared_entry_header(e->grant_table, gop.ref)->flags |=
             GTF_transfer_completed;
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -203,6 +203,10 @@ static void populate_physmap(struct memo
         if ( unlikely(__copy_from_guest_offset(&gpfn, a->extent_list, i, 1)) )
             goto out;
 
+        if ( paging_mode_translate(d) &&
+             notify_gfn(d, _gfn(gpfn + (1U << a->extent_order) - 1)) )
+            goto out;
+
         if ( a->memflags & MEMF_populate_on_demand )
         {
             /* Disallow populating PoD pages on oneself. */
@@ -745,6 +749,10 @@ static long memory_exchange(XEN_GUEST_HA
                 continue;
             }
 
+            if ( paging_mode_translate(d) )
+                rc = notify_gfn(d,
+                                _gfn(gpfn + (1U << exch.out.extent_order) - 1));
+
             mfn = page_to_mfn(page);
             guest_physmap_add_page(d, _gfn(gpfn), mfn,
                                    exch.out.extent_order);
@@ -813,12 +821,20 @@ int xenmem_add_to_physmap(struct domain
     extra.foreign_domid = DOMID_INVALID;
 
     if ( xatp->space != XENMAPSPACE_gmfn_range )
-        return xenmem_add_to_physmap_one(d, xatp->space, extra,
+        return notify_gfn(d, _gfn(xatp->gpfn)) ?:
+               xenmem_add_to_physmap_one(d, xatp->space, extra,
                                          xatp->idx, _gfn(xatp->gpfn));
 
     if ( xatp->size < start )
         return -EILSEQ;
 
+    if ( !start && xatp->size )
+    {
+        rc = notify_gfn(d, _gfn(xatp->gpfn + xatp->size - 1));
+        if ( rc )
+            return rc;
+    }
+
     xatp->idx += start;
     xatp->gpfn += start;
     xatp->size -= start;
@@ -891,7 +907,8 @@ static int xenmem_add_to_physmap_batch(s
                                           extent, 1)) )
             return -EFAULT;
 
-        rc = xenmem_add_to_physmap_one(d, xatpb->space,
+        rc = notify_gfn(d, _gfn(gpfn)) ?:
+             xenmem_add_to_physmap_one(d, xatpb->space,
                                        xatpb->u,
                                        idx, _gfn(gpfn));
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -530,6 +530,14 @@ void iommu_share_p2m_table(struct domain
         iommu_get_ops()->share_p2m(d);
 }
 
+int iommu_notify_gfn(struct domain *d, gfn_t gfn)
+{
+    const struct iommu_ops *ops = dom_iommu(d)->platform_ops;
+
+    return need_iommu_pt_sync(d) && ops->notify_dfn
+           ? iommu_call(ops, notify_dfn, d, _dfn(gfn_x(gfn))) : 0;
+}
+
 void iommu_crash_shutdown(void)
 {
     if ( !iommu_crash_disable )
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -237,6 +237,8 @@ struct iommu_ops {
     int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                     unsigned int *flags);
 
+    int __must_check (*notify_dfn)(struct domain *d, dfn_t dfn);
+
     void (*free_page_table)(struct page_info *);
 
 #ifdef CONFIG_X86
@@ -331,6 +333,7 @@ void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *, void *);
 
 void iommu_share_p2m_table(struct domain *d);
+int __must_check iommu_notify_gfn(struct domain *d, gfn_t gfn);
 
 #ifdef CONFIG_HAS_PCI
 int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1039,6 +1039,11 @@ static always_inline bool is_iommu_enabl
     return evaluate_nospec(d->options & XEN_DOMCTL_CDF_iommu);
 }
 
+static inline int __must_check notify_gfn(struct domain *d, gfn_t gfn)
+{
+    return /* arch_notify_gfn(d, gfn) ?: */ iommu_notify_gfn(d, gfn);
+}
+
 extern bool sched_smt_power_savings;
 extern bool sched_disable_smt_switching;
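Also purely illustrative, since no driver-side implementation is part of this
patch: a minimal sketch of what a notify_dfn handler might look like for a
hypothetical IOMMU whose page tables lazily grow to cover only the DFNs mapped
so far. my_iommu_covered_dfn() and my_iommu_extend_tables() are invented
stand-ins for whatever bookkeeping a real driver maintains.

/* Hypothetical driver-side handler for the new notify_dfn hook. */
static int __must_check my_iommu_notify_dfn(struct domain *d, dfn_t dfn)
{
    /* Nothing to do if the current page-table hierarchy already covers dfn. */
    if ( dfn_x(dfn) <= dfn_x(my_iommu_covered_dfn(d)) )
        return 0;

    /*
     * Extend the hierarchy ahead of time, so that a later map of this DFN
     * cannot fail merely for lack of coverage.
     */
    return my_iommu_extend_tables(d, dfn);
}

static const struct iommu_ops my_iommu_ops = {
    /* ... other hooks ... */
    .notify_dfn = my_iommu_notify_dfn,
};

Since iommu_notify_gfn() above only invokes the hook when need_iommu_pt_sync(d)
is true and .notify_dfn is populated, drivers whose tables always cover the
full DFN space can simply leave the hook unset.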