From patchwork Fri Nov 20 09:48:56 2020
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11919875
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Wei Liu, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH v2 08/12] viridian: add ExProcessorMasks variants of the flush hypercalls
Date: Fri, 20 Nov 2020 09:48:56 +0000
Message-Id: <20201120094900.1489-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201120094900.1489-1-paul@xen.org>
References: <20201120094900.1489-1-paul@xen.org>
MIME-Version: 1.0

From: Paul Durrant

The Microsoft Hypervisor TLFS specifies variants of the already implemented
HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST hypercalls that take a 'Virtual
Processor Set' as an argument rather than a simple 64-bit mask.

This patch adds a new hvcall_flush_ex() function to implement these
(HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE/LIST_EX) hypercalls. It makes use of two
new helper functions: hv_vpset_nr_banks(), to determine the size of the
Virtual Processor Set so that it can be copied from guest memory, and
hv_vpset_to_vpmask(), to parse the set into a hypercall_vpmask.

NOTE: A guest should not yet issue these hypercalls, as 'ExProcessorMasks'
      support needs to be advertised via CPUID. This will be done in a
      subsequent patch.

Signed-off-by: Paul Durrant
---
Cc: Wei Liu
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: "Roger Pau Monné"

v2:
 - Add helper macros to define mask and struct sizes
 - Use a union to determine the size of 'hypercall_vpset'
 - Use hweight64() in hv_vpset_nr_banks()
 - Sanity check size before hvm_copy_from_guest_phys()
---
 xen/arch/x86/hvm/viridian/viridian.c | 142 +++++++++++++++++++++++++++
 1 file changed, 142 insertions(+)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index d6f47b28c1e6..e736c0739da0 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -576,6 +576,70 @@ static unsigned int vpmask_nr(const struct hypercall_vpmask *vpmask)
     return bitmap_weight(vpmask->mask, HVM_MAX_VCPUS);
 }
 
+#define HV_VPSET_BANK_SIZE \
+    sizeof_field(struct hv_vpset, bank_contents[0])
+
+#define HV_VPSET_SIZE(banks) \
+    (sizeof(struct hv_vpset) + (banks * HV_VPSET_BANK_SIZE))
+
+#define HV_VPSET_MAX_BANKS \
+    (sizeof_field(struct hv_vpset, valid_bank_mask) * 8)
+
+struct hypercall_vpset {
+    union {
+        struct hv_vpset set;
+        uint8_t pad[HV_VPSET_SIZE(HV_VPSET_MAX_BANKS)];
+    };
+};
+
+static DEFINE_PER_CPU(struct hypercall_vpset, hypercall_vpset);
+
+static unsigned int hv_vpset_nr_banks(struct hv_vpset *vpset)
+{
+    return hweight64(vpset->valid_bank_mask);
+}
+
+static int hv_vpset_to_vpmask(struct hv_vpset *set,
+                              struct hypercall_vpmask *vpmask)
+{
+#define NR_VPS_PER_BANK (HV_VPSET_BANK_SIZE * 8)
+
+    switch ( set->format )
+    {
+    case HV_GENERIC_SET_ALL:
+        vpmask_fill(vpmask);
+        return 0;
+
+    case HV_GENERIC_SET_SPARSE_4K:
+    {
+        uint64_t bank_mask;
+        unsigned int vp, bank = 0;
+
+        vpmask_empty(vpmask);
+        for ( vp = 0, bank_mask = set->valid_bank_mask;
+              bank_mask;
+              vp += NR_VPS_PER_BANK, bank_mask >>= 1 )
+        {
+            if ( bank_mask & 1 )
+            {
+                uint64_t mask = set->bank_contents[bank];
+
+                vpmask_set(vpmask, vp, mask);
+                bank++;
+            }
+        }
+        return 0;
+    }
+
+    default:
+        break;
+    }
+
+    return -EINVAL;
+
+#undef NR_VPS_PER_BANK
+}
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -656,6 +720,78 @@ static int hvcall_flush(union hypercall_input *input,
     return 0;
 }
 
+static int hvcall_flush_ex(union hypercall_input *input,
+                           union hypercall_output *output,
+                           unsigned long input_params_gpa,
+                           unsigned long output_params_gpa)
+{
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
+    struct {
+        uint64_t address_space;
+        uint64_t flags;
+        struct hv_vpset set;
+    } input_params;
+
+    /* These hypercalls should never use the fast-call convention. */
+    if ( input->fast )
+        return -EINVAL;
+
+    /* Get input parameters. */
+    if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
+                                  sizeof(input_params)) != HVMTRANS_okay )
+        return -EINVAL;
+
+    if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
+        vpmask_fill(vpmask);
+    else
+    {
+        struct hypercall_vpset *vpset = &this_cpu(hypercall_vpset);
+        struct hv_vpset *set = &vpset->set;
+        size_t size;
+        int rc;
+
+        *set = input_params.set;
+        if ( set->format == HV_GENERIC_SET_SPARSE_4K )
+        {
+            unsigned long offset = offsetof(typeof(input_params),
+                                            set.bank_contents);
+
+            size = sizeof(*set->bank_contents) * hv_vpset_nr_banks(set);
+
+            if ( offsetof(typeof(*vpset), set.bank_contents[0]) + size >
+                 sizeof(*vpset) )
+            {
+                ASSERT_UNREACHABLE();
+                return -EINVAL;
+            }
+
+            if ( hvm_copy_from_guest_phys(&set->bank_contents[0],
+                                          input_params_gpa + offset,
+                                          size) != HVMTRANS_okay )
+                return -EINVAL;
+
+            size += sizeof(*set);
+        }
+        else
+            size = sizeof(*set);
+
+        rc = hv_vpset_to_vpmask(set, vpmask);
+        if ( rc )
+            return rc;
+    }
+
+    /*
+     * A false return means that another vcpu is currently trying
+     * a similar operation, so back off.
+     */
+    if ( !paging_flush_tlb(need_flush, vpmask) )
+        return -ERESTART;
+
+    output->rep_complete = input->rep_count;
+
+    return 0;
+}
+
 static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
 {
     struct domain *currd = current->domain;
@@ -769,6 +905,12 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                           output_params_gpa);
         break;
 
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE_EX:
+    case HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST_EX:
+        rc = hvcall_flush_ex(&input, &output, input_params_gpa,
+                             output_params_gpa);
+        break;
+
     case HVCALL_SEND_IPI:
         rc = hvcall_ipi(&input, &output, input_params_gpa,
                         output_params_gpa);
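
For reference (not part of the patch): the sketch below illustrates, from the
guest's side, the HV_GENERIC_SET_SPARSE_4K layout that hv_vpset_nr_banks() and
hv_vpset_to_vpmask() consume. Bank N covers VPs N*64 .. N*64+63; a set bit in
valid_bank_mask means a 64-bit entry for that bank is packed, in ascending bank
order, into bank_contents[]. The names used here (example_vpset, EX_MAX_VPS,
example_build_sparse_4k) are illustrative stand-ins and not identifiers from
Xen or the TLFS headers.

/*
 * Illustrative sketch only: build a sparse-4K VP set from a plain
 * per-bank bitmap of virtual processors.
 */
#include <stdint.h>

#define EX_MAX_VPS   256                      /* arbitrary for the example */
#define EX_MAX_BANKS (EX_MAX_VPS / 64)

struct example_vpset {                        /* simplified stand-in for struct hv_vpset */
    uint64_t format;                          /* HV_GENERIC_SET_SPARSE_4K (0 per the TLFS) */
    uint64_t valid_bank_mask;                 /* bit N set => an entry for bank N follows */
    uint64_t bank_contents[EX_MAX_BANKS];     /* packed banks, ascending bank order */
};

/* vp_bitmap[N] holds VPs N*64 .. N*64+63; returns the number of packed banks. */
static unsigned int example_build_sparse_4k(struct example_vpset *set,
                                            const uint64_t vp_bitmap[EX_MAX_BANKS])
{
    unsigned int bank, nr_banks = 0;

    set->format = 0;                          /* HV_GENERIC_SET_SPARSE_4K */
    set->valid_bank_mask = 0;

    for ( bank = 0; bank < EX_MAX_BANKS; bank++ )
    {
        if ( !vp_bitmap[bank] )
            continue;                         /* empty banks are omitted entirely */

        set->valid_bank_mask |= UINT64_C(1) << bank;
        set->bank_contents[nr_banks++] = vp_bitmap[bank];
    }

    return nr_banks;
}

The wire size of such a set is sizeof(struct hv_vpset) plus 8 bytes per packed
bank, which is exactly what hvcall_flush_ex() above copies from guest memory:
sizeof(*set) plus hv_vpset_nr_banks() bank entries.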