From patchwork Mon Jan 27 18:11:11 2020
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11353069
envelope-from="roger.pau@citrix.com"; x-sender="postmaster@mail.citrix.com"; x-conformance=sidf_compatible IronPort-SDR: 92GTHQogjf6mfJI5Lj9YxgGU421ow+XwT0fxbFTkGzpumJWemu5AuwlFDzeHdurVHNhXtobr4k g8Th6kTY2CwcLynEtminWd2piKeKLLkg39rsqIE50ElJ+h5mOX/KGCpZZKHDdPimhimQCTIZDq x2F89p9lfW6ordHPniMMOPev0lD8kYLnS+ceM182sTEZJU4uwqDMkfBkgM5n8QPu2Fi5TIOvN5 4Sus5Ql2SJOpyckyZOEu1WmbfBYm16havZXRIpsGCtdezJ5SU5Pfs9YDhRdr9Qd2IFY9n9omsZ Wog= X-SBRS: 2.7 X-MesageID: 11501352 X-Ironport-Server: esa3.hc3370-68.iphmx.com X-Remote-IP: 162.221.158.21 X-Policy: $RELAYED X-IronPort-AV: E=Sophos;i="5.70,370,1574139600"; d="scan'208";a="11501352" From: Roger Pau Monne To: Date: Mon, 27 Jan 2020 19:11:11 +0100 Message-ID: <20200127181115.82709-4-roger.pau@citrix.com> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200127181115.82709-1-roger.pau@citrix.com> References: <20200127181115.82709-1-roger.pau@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v3 3/7] x86/paging: add TLB flush hooks X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Wei Liu , George Dunlap , Andrew Cooper , Tim Deegan , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" Add shadow and hap implementation specific helpers to perform guest TLB flushes. Note that the code for both is exactly the same at the moment, and is copied from hvm_flush_vcpu_tlb. This will be changed by further patches that will add implementation specific optimizations to them. No functional change intended. Signed-off-by: Roger Pau Monné Reviewed-by: Wei Liu --- xen/arch/x86/hvm/hvm.c | 51 ++---------------------------- xen/arch/x86/mm/hap/hap.c | 54 ++++++++++++++++++++++++++++++++ xen/arch/x86/mm/shadow/common.c | 55 +++++++++++++++++++++++++++++++++ xen/arch/x86/mm/shadow/multi.c | 1 - xen/include/asm-x86/hap.h | 3 ++ xen/include/asm-x86/shadow.h | 12 +++++++ 6 files changed, 127 insertions(+), 49 deletions(-) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index 0b93609a82..96c419f0ef 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -3986,55 +3986,10 @@ static void hvm_s3_resume(struct domain *d) bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v), void *ctxt) { - static DEFINE_PER_CPU(cpumask_t, flush_cpumask); - cpumask_t *mask = &this_cpu(flush_cpumask); - struct domain *d = current->domain; - struct vcpu *v; - - /* Avoid deadlock if more than one vcpu tries this at the same time. */ - if ( !spin_trylock(&d->hypercall_deadlock_mutex) ) - return false; - - /* Pause all other vcpus. */ - for_each_vcpu ( d, v ) - if ( v != current && flush_vcpu(ctxt, v) ) - vcpu_pause_nosync(v); - - /* Now that all VCPUs are signalled to deschedule, we wait... */ - for_each_vcpu ( d, v ) - if ( v != current && flush_vcpu(ctxt, v) ) - while ( !vcpu_runnable(v) && v->is_running ) - cpu_relax(); - - /* All other vcpus are paused, safe to unlock now. */ - spin_unlock(&d->hypercall_deadlock_mutex); - - cpumask_clear(mask); - - /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */ - for_each_vcpu ( d, v ) - { - unsigned int cpu; - - if ( !flush_vcpu(ctxt, v) ) - continue; - - paging_update_cr3(v, false); + struct domain *currd = current->domain; - cpu = read_atomic(&v->dirty_cpu); - if ( is_vcpu_dirty_cpu(cpu) ) - __cpumask_set_cpu(cpu, mask); - } - - /* Flush TLBs on all CPUs with dirty vcpu state. */ - flush_tlb_mask(mask); - - /* Done. 
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
-
-    return true;
+    return shadow_mode_enabled(currd) ? shadow_flush_tlb(flush_vcpu, ctxt)
+                                      : hap_flush_tlb(flush_vcpu, ctxt);
 }
 
 static bool always_flush(void *ctxt, struct vcpu *v)
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d93f3451c..6894c1aa38 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -669,6 +669,60 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     hvm_update_guest_cr3(v, noflush);
 }
 
+bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                   void *ctxt)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(ctxt, v) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    flush_tlb_mask(mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 const struct paging_mode *
 hap_paging_get_mode(struct vcpu *v)
 {
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6212ec2c4a..ee90e55b41 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3357,6 +3357,61 @@ out:
     return rc;
 }
 
+/* Flush TLB of selected vCPUs. */
+bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt)
+{
+    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
+    cpumask_t *mask = &this_cpu(flush_cpumask);
+    struct domain *d = current->domain;
+    struct vcpu *v;
+
+    /* Avoid deadlock if more than one vcpu tries this at the same time. */
+    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+        return false;
+
+    /* Pause all other vcpus. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_pause_nosync(v);
+
+    /* Now that all VCPUs are signalled to deschedule, we wait... */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            while ( !vcpu_runnable(v) && v->is_running )
+                cpu_relax();
+
+    /* All other vcpus are paused, safe to unlock now. */
+    spin_unlock(&d->hypercall_deadlock_mutex);
+
+    cpumask_clear(mask);
+
+    /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
+    for_each_vcpu ( d, v )
+    {
+        unsigned int cpu;
+
+        if ( !flush_vcpu(ctxt, v) )
+            continue;
+
+        paging_update_cr3(v, false);
+
+        cpu = read_atomic(&v->dirty_cpu);
+        if ( is_vcpu_dirty_cpu(cpu) )
+            __cpumask_set_cpu(cpu, mask);
+    }
+
+    /* Flush TLBs on all CPUs with dirty vcpu state. */
+    flush_tlb_mask(mask);
+
+    /* Done. */
+    for_each_vcpu ( d, v )
+        if ( v != current && flush_vcpu(ctxt, v) )
+            vcpu_unpause(v);
+
+    return true;
+}
+
 /**************************************************************************/
 /* Shadow-control XEN_DOMCTL dispatcher */
 
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 26798b317c..dfe264cf83 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4157,7 +4157,6 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( do_locking ) paging_unlock(v->domain);
 }
 
-
 /**************************************************************************/
 /* Functions to revoke guest rights */
 
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index b94bfb4ed0..0c6aa26b9b 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -46,6 +46,9 @@ int hap_track_dirty_vram(struct domain *d,
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
 
+bool hap_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                   void *ctxt);
+
 #endif /* XEN_HAP_H */
 
 /*
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 907c71f497..3c1f6df478 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -95,6 +95,10 @@ void shadow_blow_tables_per_domain(struct domain *d);
 int shadow_set_allocation(struct domain *d, unsigned int pages,
                           bool *preempted);
 
+/* Flush the TLB of the selected vCPUs. */
+bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
+                      void *ctxt);
+
 #else /* !CONFIG_SHADOW_PAGING */
 
 #define shadow_teardown(d, p) ASSERT(is_pv_domain(d))
@@ -106,6 +110,14 @@ int shadow_set_allocation(struct domain *d, unsigned int pages,
 #define shadow_set_allocation(d, pages, preempted) \
     ({ ASSERT_UNREACHABLE(); -EOPNOTSUPP; })
 
+static inline bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt,
+                                                       struct vcpu *v),
+                                    void *ctxt)
+{
+    ASSERT_UNREACHABLE();
+    return false;
+}
+
 static inline void sh_remove_shadows(struct domain *d, mfn_t gmfn,
                                      int fast, int all) {}
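
For reference, a short sketch of how callers drive the new hooks through
hvm_flush_vcpu_tlb(). always_flush mirrors the helper already present in
hvm.c; flush_from_bitmap and its vcpu_bitmap argument are hypothetical,
shown only to illustrate a selective callback in the style of the
Viridian flush handling:

/* Flush every vCPU of the current domain. */
static bool always_flush(void *ctxt, struct vcpu *v)
{
    return true;
}

/*
 * Hypothetical selective callback: flush only the vCPUs whose bit is
 * set in a caller-provided bitmap passed through ctxt.
 */
static bool flush_from_bitmap(void *ctxt, struct vcpu *v)
{
    const unsigned long *vcpu_bitmap = ctxt;

    return !vcpu_bitmap || test_bit(v->vcpu_id, vcpu_bitmap);
}

/*
 * A call such as hvm_flush_vcpu_tlb(always_flush, NULL) then dispatches
 * to shadow_flush_tlb() or hap_flush_tlb() depending on the paging mode
 * of the current domain.
 */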