From patchwork Wed Feb 19 11:44:09 2020
X-Patchwork-Submitter: Wei Liu
X-Patchwork-Id: 11391281
From: Wei Liu
To: Xen Development List
Cc: Wei Liu, Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
    Jan Beulich, Roger Pau Monné
Date: Wed, 19 Feb 2020 11:44:09 +0000
Message-Id: <20200219114411.26922-2-liuwe@microsoft.com>
In-Reply-To: <20200219114411.26922-1-liuwe@microsoft.com>
References: <20200219114411.26922-1-liuwe@microsoft.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH v4 1/3] x86/hypervisor: pass flags to hypervisor_flush_tlb

Hyper-V's L0 assisted flush has fine-grained control over what gets
flushed. We need all the flags available to make the best decisions
possible.

No functional change because Xen's implementation doesn't care about
what is passed to it.

Signed-off-by: Wei Liu
Reviewed-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
v2:
1. Introduce FLUSH_TLB_FLAGS_MASK
---
 xen/arch/x86/guest/hypervisor.c        |  7 +++++--
 xen/arch/x86/guest/xen/xen.c           |  2 +-
 xen/arch/x86/smp.c                     |  5 ++---
 xen/include/asm-x86/flushtlb.h         |  3 +++
 xen/include/asm-x86/guest/hypervisor.h | 10 +++++-----
 5 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/guest/hypervisor.c b/xen/arch/x86/guest/hypervisor.c
index 47e938e287..6ee28c9df1 100644
--- a/xen/arch/x86/guest/hypervisor.c
+++ b/xen/arch/x86/guest/hypervisor.c
@@ -75,10 +75,13 @@ void __init hypervisor_e820_fixup(struct e820map *e820)
 }
 
 int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
-                         unsigned int order)
+                         unsigned int flags)
 {
+    if ( flags & ~FLUSH_TLB_FLAGS_MASK )
+        return -EINVAL;
+
     if ( ops.flush_tlb )
-        return alternative_call(ops.flush_tlb, mask, va, order);
+        return alternative_call(ops.flush_tlb, mask, va, flags);
 
     return -ENOSYS;
 }
diff --git a/xen/arch/x86/guest/xen/xen.c b/xen/arch/x86/guest/xen/xen.c
index 5d3427a713..0eb1115c4d 100644
--- a/xen/arch/x86/guest/xen/xen.c
+++ b/xen/arch/x86/guest/xen/xen.c
@@ -324,7 +324,7 @@ static void __init e820_fixup(struct e820map *e820)
         pv_shim_fixup_e820(e820);
 }
 
-static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int order)
+static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int flags)
 {
     return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
 }
diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index c7caf5bc26..4dab74c0d5 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -258,9 +258,8 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
          !cpumask_subset(mask, cpumask_of(cpu)) )
     {
         if ( cpu_has_hypervisor &&
-             !(flags & ~(FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID |
-                         FLUSH_ORDER_MASK)) &&
-             !hypervisor_flush_tlb(mask, va, flags & FLUSH_ORDER_MASK) )
+             !(flags & ~FLUSH_TLB_FLAGS_MASK) &&
+             !hypervisor_flush_tlb(mask, va, flags) )
         {
             if ( tlb_clk_enabled )
                 tlb_clk_enabled = false;
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index 9773014320..a4de317452 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -123,6 +123,9 @@ void switch_cr3_cr4(unsigned long cr3, unsigned long cr4);
 /* Flush all HVM guests linear TLB (using ASID/VPID) */
 #define FLUSH_GUESTS_TLB 0x4000
 
+#define FLUSH_TLB_FLAGS_MASK (FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID | \
+                              FLUSH_ORDER_MASK)
+
 /* Flush local TLBs/caches. */
 unsigned int flush_area_local(const void *va, unsigned int flags);
 #define flush_local(flags) flush_area_local(NULL, flags)
diff --git a/xen/include/asm-x86/guest/hypervisor.h b/xen/include/asm-x86/guest/hypervisor.h
index 432e57c2a0..48d54735d2 100644
--- a/xen/include/asm-x86/guest/hypervisor.h
+++ b/xen/include/asm-x86/guest/hypervisor.h
@@ -35,7 +35,7 @@ struct hypervisor_ops {
     /* Fix up e820 map */
     void (*e820_fixup)(struct e820map *e820);
     /* L0 assisted TLB flush */
-    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int order);
+    int (*flush_tlb)(const cpumask_t *mask, const void *va, unsigned int flags);
 };
 
 #ifdef CONFIG_GUEST
@@ -48,11 +48,11 @@ void hypervisor_e820_fixup(struct e820map *e820);
 /*
  * L0 assisted TLB flush.
  * mask: cpumask of the dirty vCPUs that should be flushed.
- * va: linear address to flush, or NULL for global flushes.
- * order: order of the linear address pointed by va.
+ * va: linear address to flush, or NULL for entire address space.
+ * flags: flags for flushing, including the order of va.
  */
 int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
-                         unsigned int order);
+                         unsigned int flags);
 
 #else
 
@@ -65,7 +65,7 @@ static inline int hypervisor_ap_setup(void) { return 0; }
 static inline void hypervisor_resume(void) { ASSERT_UNREACHABLE(); }
 static inline void hypervisor_e820_fixup(struct e820map *e820) {}
 static inline int hypervisor_flush_tlb(const cpumask_t *mask, const void *va,
-                                       unsigned int order)
+                                       unsigned int flags)
 {
     return -ENOSYS;
 }
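The effect of the new mask is easiest to see from the caller's side: a flush
request packs the page order into the low bits and the flush type into the
higher bits, and the assisted path is only taken when no bit outside
FLUSH_TLB_FLAGS_MASK is set. Below is a minimal standalone sketch of that
check (illustrative only, not part of the patch; the constant values are made
up and do not match Xen's real definitions):

/*
 * Standalone sketch of the flags convention: low bits carry the page
 * order (biased by 1), upper bits are flush-type flags.  Values are
 * illustrative, not Xen's real definitions.
 */
#include <stdio.h>

#define FLUSH_ORDER_MASK  0xff            /* low byte: order + 1 (illustrative) */
#define FLUSH_ORDER(x)    ((x) + 1)
#define FLUSH_TLB         0x100
#define FLUSH_TLB_GLOBAL  0x200
#define FLUSH_VA_VALID    0x400
#define FLUSH_CACHE       0x800           /* not forwarded to the hypervisor */

#define FLUSH_TLB_FLAGS_MASK (FLUSH_TLB | FLUSH_TLB_GLOBAL | FLUSH_VA_VALID | \
                              FLUSH_ORDER_MASK)

/* Mirrors the check added to hypervisor_flush_tlb() in this patch. */
static int can_use_assisted_flush(unsigned int flags)
{
    return !(flags & ~FLUSH_TLB_FLAGS_MASK);
}

int main(void)
{
    unsigned int ok  = FLUSH_TLB | FLUSH_VA_VALID | FLUSH_ORDER(2);
    unsigned int bad = FLUSH_TLB | FLUSH_CACHE;

    printf("order-2 TLB flush: assisted path %s\n",
           can_use_assisted_flush(ok) ? "allowed" : "rejected");
    printf("flush with cache bit: assisted path %s\n",
           can_use_assisted_flush(bad) ? "allowed" : "rejected");
    return 0;
}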
From patchwork Wed Feb 19 11:44:10 2020
X-Patchwork-Submitter: Wei Liu
X-Patchwork-Id: 11391279
From: Wei Liu
To: Xen Development List
Cc: Wei Liu, Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
    Jan Beulich, Roger Pau Monné
Date: Wed, 19 Feb 2020 11:44:10 +0000
Message-Id: <20200219114411.26922-3-liuwe@microsoft.com>
In-Reply-To: <20200219114411.26922-1-liuwe@microsoft.com>
References: <20200219114411.26922-1-liuwe@microsoft.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH v4 2/3] x86/hyperv: skeleton for L0 assisted TLB flush

Implement a basic hook for L0 assisted TLB flush. The hook needs to
check whether the prerequisites are met. If they are not met, it
returns an error number so that the caller falls back to native
flushes.

Introduce a new variable to indicate whether the hypercall page is
ready.

Signed-off-by: Wei Liu
Reviewed-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
v3:
1. Change hv_hcall_page_ready to hcall_page_ready
---
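The fallback contract described above is driven purely by return values: any
non-zero return from the hook sends the request back to the native IPI-based
flush. A standalone sketch of that dispatch follows (illustrative only, not
part of the patch; names and checks are simplified):

/*
 * Sketch of the fallback contract: the assisted path is only an
 * optimisation, so any error routes the request back to the native
 * IPI-based flush.  Names are illustrative.
 */
#include <errno.h>
#include <stdio.h>

static int assisted_flush_available = 0;  /* e.g. no hypercall page yet */

static int hypervisor_assisted_flush(void)
{
    if ( !assisted_flush_available )
        return -ENXIO;          /* prerequisites not met */
    return 0;                   /* flush done by the hypervisor */
}

static void native_ipi_flush(void)
{
    printf("falling back to IPI-based TLB flush\n");
}

static void flush_area_mask_like(void)
{
    if ( hypervisor_assisted_flush() == 0 )
        printf("L0 assisted flush used\n");
    else
        native_ipi_flush();
}

int main(void)
{
    flush_area_mask_like();     /* prints the fallback message */
    assisted_flush_available = 1;
    flush_area_mask_like();     /* prints the assisted message */
    return 0;
}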
 xen/arch/x86/guest/hyperv/Makefile  |  1 +
 xen/arch/x86/guest/hyperv/hyperv.c  | 17 ++++++++++++
 xen/arch/x86/guest/hyperv/private.h |  4 +++
 xen/arch/x86/guest/hyperv/tlb.c     | 41 +++++++++++++++++++++++++++++
 4 files changed, 63 insertions(+)
 create mode 100644 xen/arch/x86/guest/hyperv/tlb.c

diff --git a/xen/arch/x86/guest/hyperv/Makefile b/xen/arch/x86/guest/hyperv/Makefile
index 68170109a9..18902c33e9 100644
--- a/xen/arch/x86/guest/hyperv/Makefile
+++ b/xen/arch/x86/guest/hyperv/Makefile
@@ -1 +1,2 @@
 obj-y += hyperv.o
+obj-y += tlb.o
diff --git a/xen/arch/x86/guest/hyperv/hyperv.c b/xen/arch/x86/guest/hyperv/hyperv.c
index 70f4cd5ae0..f1b3073712 100644
--- a/xen/arch/x86/guest/hyperv/hyperv.c
+++ b/xen/arch/x86/guest/hyperv/hyperv.c
@@ -33,6 +33,8 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
 DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
 DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
 
+static bool __read_mostly hcall_page_ready;
+
 static uint64_t generate_guest_id(void)
 {
     union hv_guest_os_id id = {};
@@ -119,6 +121,8 @@ static void __init setup_hypercall_page(void)
     BUG_ON(!hypercall_msr.enable);
 
     set_fixmap_x(FIX_X_HYPERV_HCALL, mfn << PAGE_SHIFT);
+
+    hcall_page_ready = true;
 }
 
 static int setup_hypercall_pcpu_arg(void)
@@ -199,11 +203,24 @@ static void __init e820_fixup(struct e820map *e820)
         panic("Unable to reserve Hyper-V hypercall range\n");
 }
 
+static int flush_tlb(const cpumask_t *mask, const void *va,
+                     unsigned int flags)
+{
+    if ( !(ms_hyperv.hints & HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED) )
+        return -EOPNOTSUPP;
+
+    if ( !hcall_page_ready || !this_cpu(hv_input_page) )
+        return -ENXIO;
+
+    return hyperv_flush_tlb(mask, va, flags);
+}
+
 static const struct hypervisor_ops __initdata ops = {
     .name = "Hyper-V",
     .setup = setup,
     .ap_setup = ap_setup,
     .e820_fixup = e820_fixup,
+    .flush_tlb = flush_tlb,
 };
 
 /*
diff --git a/xen/arch/x86/guest/hyperv/private.h b/xen/arch/x86/guest/hyperv/private.h
index 956eff831f..509bedaafa 100644
--- a/xen/arch/x86/guest/hyperv/private.h
+++ b/xen/arch/x86/guest/hyperv/private.h
@@ -22,10 +22,14 @@
 #ifndef __XEN_HYPERV_PRIVIATE_H__
 #define __XEN_HYPERV_PRIVIATE_H__
 
+#include
 #include
 
 DECLARE_PER_CPU(void *, hv_input_page);
 DECLARE_PER_CPU(void *, hv_vp_assist);
 DECLARE_PER_CPU(unsigned int, hv_vp_index);
 
+int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
+                     unsigned int flags);
+
 #endif /* __XEN_HYPERV_PRIVIATE_H__ */
diff --git a/xen/arch/x86/guest/hyperv/tlb.c b/xen/arch/x86/guest/hyperv/tlb.c
new file mode 100644
index 0000000000..48f527229e
--- /dev/null
+++ b/xen/arch/x86/guest/hyperv/tlb.c
@@ -0,0 +1,41 @@
+/******************************************************************************
+ * arch/x86/guest/hyperv/tlb.c
+ *
+ * Support for TLB management using hypercalls
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see .
+ *
+ * Copyright (c) 2020 Microsoft.
+ */
+
+#include
+#include
+
+#include "private.h"
+
+int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
+                     unsigned int flags)
+{
+    return -EOPNOTSUPP;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
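Putting the pieces together, the generic layer calls through the
hypervisor_ops table, the Hyper-V hook vets its prerequisites, and
hyperv_flush_tlb() does the real work once the next patch fills it in. A
standalone sketch of that chain (illustrative only, not Xen code; names and
checks are simplified):

/*
 * Sketch of how the skeleton wires together: an ops table dispatch plus
 * a hook that checks its prerequisites before handing off.  All names
 * and checks here are illustrative.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct hypervisor_ops_like {
    int (*flush_tlb)(const void *va, unsigned int flags);
};

static bool hints_recommend_flush = true;
static bool hcall_page_ready = false;

static int hyperv_flush_tlb_like(const void *va, unsigned int flags)
{
    return -EOPNOTSUPP;                 /* filled in by the next patch */
}

static int hyperv_flush_tlb_hook(const void *va, unsigned int flags)
{
    if ( !hints_recommend_flush )
        return -EOPNOTSUPP;
    if ( !hcall_page_ready )
        return -ENXIO;
    return hyperv_flush_tlb_like(va, flags);
}

static const struct hypervisor_ops_like ops = {
    .flush_tlb = hyperv_flush_tlb_hook,
};

int main(void)
{
    int rc = ops.flush_tlb ? ops.flush_tlb(NULL, 0) : -ENOSYS;

    printf("assisted flush rc = %d (non-zero means: use native flush)\n", rc);
    return 0;
}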
From patchwork Wed Feb 19 11:44:11 2020
X-Patchwork-Submitter: Wei Liu
X-Patchwork-Id: 11391283
From: Wei Liu
To: Xen Development List
Cc: Wei Liu, Wei Liu, Andrew Cooper, Paul Durrant, Michael Kelley,
    Jan Beulich, Roger Pau Monné
Date: Wed, 19 Feb 2020 11:44:11 +0000
Message-Id: <20200219114411.26922-4-liuwe@microsoft.com>
In-Reply-To: <20200219114411.26922-1-liuwe@microsoft.com>
References: <20200219114411.26922-1-liuwe@microsoft.com>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH v4 3/3] x86/hyperv: L0 assisted TLB flush

Implement L0 assisted TLB flush for Xen on Hyper-V. It takes advantage
of several hypercalls:

 * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST
 * HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX
 * HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE
 * HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX

Pick the most efficient hypercall available.

Signed-off-by: Wei Liu
Reviewed-by: Roger Pau Monné
Reviewed-by: Paul Durrant
---
v4:
1. Fix bank mask generation.
2. Fix page order calculation.
3. Remove types.h from private.h.
4. Add a note about nmi and mc handling.

v3:
1. Address more comments.
2. Fix usage of max_vp_index.
3. Use the fill_gva_list algorithm from Linux.

v2:
1. Address Roger and Jan's comments re types etc.
2. Fix pointer arithmetic.
3. Misc improvement to code.
---
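The single-address hypercalls take a list of GVAs in which each 64-bit
element is a page-aligned address whose low 12 bits encode how many
additional pages it covers, so one element can describe up to 4096 pages.
A standalone model of the encoding used by fill_gva_list() in the diff below
(the PAGE_* constants are defined locally for the example):

/*
 * Standalone model of the gva_list encoding: page-aligned address plus
 * a 12-bit "additional pages" count in the low bits.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT        12
#define PAGE_SIZE         (1UL << PAGE_SHIFT)
#define PAGE_MASK         (~(PAGE_SIZE - 1))
#define HV_TLB_FLUSH_UNIT (4096 * PAGE_SIZE)

static unsigned int fill_gva_list(uint64_t *gva_list, uint64_t va,
                                  unsigned int order)
{
    uint64_t cur = va;
    uint64_t end = cur + (PAGE_SIZE << order);   /* one past the range */
    unsigned int n = 0;

    do {
        uint64_t diff = end - cur;

        gva_list[n] = cur & PAGE_MASK;
        if ( diff >= HV_TLB_FLUSH_UNIT )
        {
            gva_list[n] |= ~PAGE_MASK;           /* 4095 extra pages */
            cur += HV_TLB_FLUSH_UNIT;
        }
        else
        {
            gva_list[n] |= (diff - 1) >> PAGE_SHIFT;
            cur = end;
        }
        n++;
    } while ( cur < end );

    return n;
}

int main(void)
{
    uint64_t list[8];
    /* An order-13 flush covers 8192 pages and needs two elements. */
    unsigned int n = fill_gva_list(list, 0x7f0000000000ULL, 13);

    for ( unsigned int i = 0; i < n; i++ )
        printf("entry %u: addr %#llx, extra pages %llu\n", i,
               (unsigned long long)(list[i] & PAGE_MASK),
               (unsigned long long)(list[i] & ~PAGE_MASK));
    return 0;
}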
 xen/arch/x86/guest/hyperv/Makefile  |   1 +
 xen/arch/x86/guest/hyperv/private.h |   8 ++
 xen/arch/x86/guest/hyperv/tlb.c     | 175 +++++++++++++++++++++++++++-
 xen/arch/x86/guest/hyperv/util.c    |  75 ++++++++++++
 4 files changed, 258 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/x86/guest/hyperv/util.c

diff --git a/xen/arch/x86/guest/hyperv/Makefile b/xen/arch/x86/guest/hyperv/Makefile
index 18902c33e9..0e39410968 100644
--- a/xen/arch/x86/guest/hyperv/Makefile
+++ b/xen/arch/x86/guest/hyperv/Makefile
@@ -1,2 +1,3 @@
 obj-y += hyperv.o
 obj-y += tlb.o
+obj-y += util.o
diff --git a/xen/arch/x86/guest/hyperv/private.h b/xen/arch/x86/guest/hyperv/private.h
index 509bedaafa..354fc7f685 100644
--- a/xen/arch/x86/guest/hyperv/private.h
+++ b/xen/arch/x86/guest/hyperv/private.h
@@ -29,7 +29,15 @@ DECLARE_PER_CPU(void *, hv_input_page);
 DECLARE_PER_CPU(void *, hv_vp_assist);
 DECLARE_PER_CPU(unsigned int, hv_vp_index);
 
+static inline unsigned int hv_vp_index(unsigned int cpu)
+{
+    return per_cpu(hv_vp_index, cpu);
+}
+
 int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
                      unsigned int flags);
 
+/* Returns number of banks, -ev if error */
+int cpumask_to_vpset(struct hv_vpset *vpset, const cpumask_t *mask);
+
 #endif /* __XEN_HYPERV_PRIVIATE_H__ */
diff --git a/xen/arch/x86/guest/hyperv/tlb.c b/xen/arch/x86/guest/hyperv/tlb.c
index 48f527229e..1d723d6ee6 100644
--- a/xen/arch/x86/guest/hyperv/tlb.c
+++ b/xen/arch/x86/guest/hyperv/tlb.c
@@ -19,17 +19,190 @@
  * Copyright (c) 2020 Microsoft.
  */
 
+#include
 #include
 #include
 
+#include
+#include
+#include
+
 #include "private.h"
 
+/*
+ * It is possible to encode up to 4096 pages using the lower 12 bits
+ * in an element of gva_list
+ */
+#define HV_TLB_FLUSH_UNIT (4096 * PAGE_SIZE)
+
+static unsigned int fill_gva_list(uint64_t *gva_list, const void *va,
+                                  unsigned int order)
+{
+    unsigned long cur = (unsigned long)va;
+    /* end is 1 past the range to be flushed */
+    unsigned long end = cur + (PAGE_SIZE << order);
+    unsigned int n = 0;
+
+    do {
+        unsigned long diff = end - cur;
+
+        gva_list[n] = cur & PAGE_MASK;
+
+        /*
+         * Use lower 12 bits to encode the number of additional pages
+         * to flush
+         */
+        if ( diff >= HV_TLB_FLUSH_UNIT )
+        {
+            gva_list[n] |= ~PAGE_MASK;
+            cur += HV_TLB_FLUSH_UNIT;
+        }
+        else
+        {
+            gva_list[n] |= (diff - 1) >> PAGE_SHIFT;
+            cur = end;
+        }
+
+        n++;
+    } while ( cur < end );
+
+    return n;
+}
+
+static uint64_t flush_tlb_ex(const cpumask_t *mask, const void *va,
+                             unsigned int flags)
+{
+    struct hv_tlb_flush_ex *flush = this_cpu(hv_input_page);
+    int nr_banks;
+    unsigned int max_gvas, order = (flags - 1) & FLUSH_ORDER_MASK;
+    uint64_t *gva_list;
+
+    if ( !flush || local_irq_is_enabled() )
+    {
+        ASSERT_UNREACHABLE();
+        return ~0ULL;
+    }
+
+    if ( !(ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED) )
+        return ~0ULL;
+
+    flush->address_space = 0;
+    flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+    if ( !(flags & FLUSH_TLB_GLOBAL) )
+        flush->flags |= HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY;
+
+    nr_banks = cpumask_to_vpset(&flush->hv_vp_set, mask);
+    if ( nr_banks < 0 )
+        return ~0ULL;
+
+    max_gvas =
+        (PAGE_SIZE - sizeof(*flush) - nr_banks *
+         sizeof(flush->hv_vp_set.bank_contents[0])) /
+        sizeof(uint64_t);       /* gva is represented as uint64_t */
+
+    /*
+     * Flush the entire address space if va is NULL or if there is not
+     * enough space for gva_list.
+     */
+    if ( !va || (PAGE_SIZE << order) / HV_TLB_FLUSH_UNIT > max_gvas )
+        return hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX, 0,
+                                   nr_banks, virt_to_maddr(flush), 0);
+
+    /*
+     * The calculation of gva_list address requires the structure to
+     * be 64 bits aligned.
+     */
+    BUILD_BUG_ON(sizeof(*flush) % sizeof(uint64_t));
+    gva_list = (uint64_t *)flush + sizeof(*flush) / sizeof(uint64_t) + nr_banks;
+
+    return hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX,
+                               fill_gva_list(gva_list, va, order),
+                               nr_banks, virt_to_maddr(flush), 0);
+}
+
+/* Maximum number of gvas for hv_tlb_flush */
+#define MAX_GVAS ((PAGE_SIZE - sizeof(struct hv_tlb_flush)) / sizeof(uint64_t))
+
 int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
                      unsigned int flags)
 {
-    return -EOPNOTSUPP;
+    unsigned long irq_flags;
+    struct hv_tlb_flush *flush = this_cpu(hv_input_page);
+    unsigned int order = (flags - 1) & FLUSH_ORDER_MASK;
+    uint64_t ret;
+
+    if ( !flush || cpumask_empty(mask) )
+    {
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+
+    /* TODO: may need to check if in #NMI or #MC and fallback to native path */
+
+    local_irq_save(irq_flags);
+
+    flush->address_space = 0;
+    flush->flags = HV_FLUSH_ALL_VIRTUAL_ADDRESS_SPACES;
+    flush->processor_mask = 0;
+    if ( !(flags & FLUSH_TLB_GLOBAL) )
+        flush->flags |= HV_FLUSH_NON_GLOBAL_MAPPINGS_ONLY;
+
+    if ( cpumask_equal(mask, &cpu_online_map) )
+        flush->flags |= HV_FLUSH_ALL_PROCESSORS;
+    else
+    {
+        unsigned int cpu;
+
+        /*
+         * Normally VP indices are in ascending order and match Xen's
+         * idea of CPU ids. Check the last index to see if VP index is
+         * >= 64. If so, we can skip setting up parameters for
+         * non-applicable hypercalls without looking further.
+         */
+        if ( hv_vp_index(cpumask_last(mask)) >= 64 )
+            goto do_ex_hypercall;
+
+        for_each_cpu ( cpu, mask )
+        {
+            unsigned int vpid = hv_vp_index(cpu);
+
+            if ( vpid >= ms_hyperv.max_vp_index )
+            {
+                local_irq_restore(irq_flags);
+                return -ENXIO;
+            }
+
+            if ( vpid >= 64 )
+                goto do_ex_hypercall;
+
+            __set_bit(vpid, &flush->processor_mask);
+        }
+    }
+
+    /*
+     * Flush the entire address space if va is NULL or if there is not
+     * enough space for gva_list.
+     */
+    if ( !va || (PAGE_SIZE << order) / HV_TLB_FLUSH_UNIT > MAX_GVAS )
+        ret = hv_do_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE,
+                              virt_to_maddr(flush), 0);
+    else
+        ret = hv_do_rep_hypercall(HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST,
+                                  fill_gva_list(flush->gva_list, va, order),
+                                  0, virt_to_maddr(flush), 0);
+
+    goto done;
+
+ do_ex_hypercall:
+    ret = flush_tlb_ex(mask, va, flags);
+
+ done:
+    local_irq_restore(irq_flags);
+
+    return ret & HV_HYPERCALL_RESULT_MASK ? -ENXIO : 0;
 }
 
+#undef MAX_GVAS
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/guest/hyperv/util.c b/xen/arch/x86/guest/hyperv/util.c
new file mode 100644
index 0000000000..bec61c2afd
--- /dev/null
+++ b/xen/arch/x86/guest/hyperv/util.c
@@ -0,0 +1,75 @@
+/******************************************************************************
+ * arch/x86/guest/hyperv/util.c
+ *
+ * Hyper-V utility functions
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see .
+ *
+ * Copyright (c) 2020 Microsoft.
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+
+#include "private.h"
+
+int cpumask_to_vpset(struct hv_vpset *vpset,
+                     const cpumask_t *mask)
+{
+    int nr = 1;
+    unsigned int cpu, vcpu_bank, vcpu_offset;
+    unsigned int max_banks = ms_hyperv.max_vp_index / 64;
+
+    /* Up to 64 banks can be represented by valid_bank_mask */
+    if ( max_banks > 64 )
+        return -E2BIG;
+
+    /* Clear all banks to avoid flushing unwanted CPUs */
+    for ( vcpu_bank = 0; vcpu_bank < max_banks; vcpu_bank++ )
+        vpset->bank_contents[vcpu_bank] = 0;
+
+    vpset->format = HV_GENERIC_SET_SPARSE_4K;
+
+    for_each_cpu ( cpu, mask )
+    {
+        unsigned int vcpu = hv_vp_index(cpu);
+
+        vcpu_bank = vcpu / 64;
+        vcpu_offset = vcpu % 64;
+
+        __set_bit(vcpu_offset, &vpset->bank_contents[vcpu_bank]);
+
+        if ( vcpu_bank >= nr )
+            nr = vcpu_bank + 1;
+    }
+
+    /* Some banks may be empty but that's ok */
+    vpset->valid_bank_mask = ~0ULL >> (64 - nr);
+
+    return nr;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
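The _EX hypercalls identify target processors with a sparse VP set: VP
indices are grouped into 64-bit banks, bank_contents[] carries one bit per
VP, and valid_bank_mask records which banks are actually present in the
hypercall input. A standalone model of the layout built by cpumask_to_vpset()
above (structure and names are simplified for the example):

/*
 * Standalone model of the sparse VP-set layout: one bit per VP inside
 * 64-VP banks, plus a mask of the banks that were included.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_BANKS 4                     /* enough for VP indices < 256 here */

struct vpset_like {
    uint64_t valid_bank_mask;
    uint64_t bank_contents[MAX_BANKS];
};

static int vp_indices_to_vpset(struct vpset_like *s, const unsigned int *vp,
                               unsigned int count)
{
    int nr = 1;

    s->valid_bank_mask = 0;
    for ( unsigned int b = 0; b < MAX_BANKS; b++ )
        s->bank_contents[b] = 0;

    for ( unsigned int i = 0; i < count; i++ )
    {
        unsigned int bank = vp[i] / 64, offset = vp[i] % 64;

        if ( bank >= MAX_BANKS )
            return -1;
        s->bank_contents[bank] |= UINT64_C(1) << offset;
        if ( (int)bank >= nr )
            nr = bank + 1;
    }

    /* Banks 0 .. nr-1 are present, even if some of them are empty. */
    s->valid_bank_mask = ~UINT64_C(0) >> (64 - nr);
    return nr;
}

int main(void)
{
    /* VP 3 lands in bank 0, VP 70 in bank 1 (bit 6), VP 130 in bank 2. */
    const unsigned int vps[] = { 3, 70, 130 };
    struct vpset_like s;
    int nr = vp_indices_to_vpset(&s, vps, 3);

    printf("banks: %d, valid_bank_mask: %#llx\n", nr,
           (unsigned long long)s.valid_bank_mask);
    for ( int b = 0; b < nr; b++ )
        printf("bank %d: %#llx\n", b, (unsigned long long)s.bank_contents[b]);
    return 0;
}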