From patchwork Tue Dec 24 13:26:15 2019
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11309337
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Tue, 24 Dec 2019 14:26:15 +0100
Message-ID: <20191224132616.47441-2-roger.pau@citrix.com>
In-Reply-To: <20191224132616.47441-1-roger.pau@citrix.com>
References: <20191224132616.47441-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 1/2] x86/hvm: improve performance of HVMOP_flush_tlbs
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

paging_update_cr3 only needs to be called when CR3 trapping is enabled,
which is the case when using shadow paging or when CR3 write events have
been requested for introspection purposes. In all other cases there's no
need to pause all the vCPUs of the domain in order to perform the flush.

Check whether CR3 trapping is currently in use in order to decide whether
the vCPUs should be paused; otherwise just perform the flush.

Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/hvm/hvm.c | 55 ++++++++++++++++++++++++++++--------------
 1 file changed, 37 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4dfaf35566..7dcc16afc6 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3985,25 +3985,36 @@ bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
     static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
     cpumask_t *mask = &this_cpu(flush_cpumask);
     struct domain *d = current->domain;
+    /*
+     * CR3 trapping is only enabled when running with shadow paging or when
+     * requested for introspection purposes, otherwise there's no need to call
+     * paging_update_cr3 and hence pause all vCPUs.
+     */
+    bool trap_cr3 = !paging_mode_hap(d) ||
+                    (d->arch.monitor.write_ctrlreg_enabled &
+                     monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3));
     struct vcpu *v;
 
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
+    if ( trap_cr3 )
+    {
+        /* Avoid deadlock if more than one vcpu tries this at the same time. */
+        if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
+            return false;
 
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
+        /* Pause all other vcpus. */
+        for_each_vcpu ( d, v )
+            if ( v != current && flush_vcpu(ctxt, v) )
+                vcpu_pause_nosync(v);
 
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
+        /* Now that all VCPUs are signalled to deschedule, we wait... */
+        for_each_vcpu ( d, v )
+            if ( v != current && flush_vcpu(ctxt, v) )
+                while ( !vcpu_runnable(v) && v->is_running )
+                    cpu_relax();
 
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
+        /* All other vcpus are paused, safe to unlock now. */
+        spin_unlock(&d->hypercall_deadlock_mutex);
+    }
 
     cpumask_clear(mask);
 
@@ -4015,8 +4026,15 @@ bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
         if ( !flush_vcpu(ctxt, v) )
             continue;
 
-        paging_update_cr3(v, false);
+        if ( trap_cr3 )
+            paging_update_cr3(v, false);
 
+        /*
+         * It's correct to do this flush without pausing the vCPUs: any vCPU
+         * context switch will already flush the TLB, and the worst that could
+         * happen is that Xen ends up performing flushes on pCPUs that are no
+         * longer running the target vCPUs.
+         */
         cpu = read_atomic(&v->dirty_cpu);
         if ( is_vcpu_dirty_cpu(cpu) )
             __cpumask_set_cpu(cpu, mask);
@@ -4026,9 +4044,10 @@ bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
     flush_tlb_mask(mask);
 
     /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
+    if ( trap_cr3 )
+        for_each_vcpu ( d, v )
+            if ( v != current && flush_vcpu(ctxt, v) )
+                vcpu_unpause(v);
 
     return true;
 }