From patchwork Wed Nov 11 20:07:15 2020
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 11898467
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Wei Liu, Jan Beulich, Andrew Cooper, Roger Pau Monné
Subject: [PATCH 04/10] viridian: use hypercall_vpmask in hvcall_ipi()
Date: Wed, 11 Nov 2020 20:07:15 +0000
Message-Id: <20201111200721.30551-5-paul@xen.org>
In-Reply-To: <20201111200721.30551-1-paul@xen.org>
References: <20201111200721.30551-1-paul@xen.org>

A subsequent patch will need to IPI a mask of virtual processors
potentially wider than 64 bits. A previous patch introduced per-cpu
hypercall_vpmask to allow hvcall_flush() to deal with such wide masks.
This patch modifies the implementation of hvcall_ipi() to make use of the
same mask structures, introducing a for_each_vp() macro to facilitate
traversing a mask.

Signed-off-by: Paul Durrant
---
Cc: Wei Liu
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: "Roger Pau Monné"
---
 xen/arch/x86/hvm/viridian/viridian.c | 43 ++++++++++++++++++++++------
 1 file changed, 35 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 4ab1f14b2248..63f63093a513 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -533,6 +533,21 @@ static bool vpmask_test(struct hypercall_vpmask *vpmask, unsigned int vp)
     return test_bit(vp, vpmask->mask);
 }
 
+static unsigned int vpmask_first(struct hypercall_vpmask *vpmask)
+{
+    return find_first_bit(vpmask->mask, HVM_MAX_VCPUS);
+}
+
+static unsigned int vpmask_next(struct hypercall_vpmask *vpmask, unsigned int vp)
+{
+    return find_next_bit(vpmask->mask, HVM_MAX_VCPUS, vp + 1);
+}
+
+#define for_each_vp(vpmask, vp) \
+    for ((vp) = vpmask_first(vpmask); \
+         (vp) < HVM_MAX_VCPUS; \
+         (vp) = vpmask_next(vpmask, vp))
+
 /*
  * Windows should not issue the hypercalls requiring this callback in the
  * case where vcpu_id would exceed the size of the mask.
@@ -624,15 +639,24 @@ static int hvcall_flush(union hypercall_input *input,
     return 0;
 }
 
+static void send_ipi(struct hypercall_vpmask *vpmask, uint8_t vector)
+{
+    struct domain *currd = current->domain;
+    unsigned int vp;
+
+    for_each_vp ( vpmask, vp )
+        vlapic_set_irq(vcpu_vlapic(currd->vcpu[vp]), vector, 0);
+}
+
 static int hvcall_ipi(union hypercall_input *input,
                       union hypercall_output *output,
                       unsigned long input_params_gpa,
                       unsigned long output_params_gpa)
 {
-    struct domain *currd = current->domain;
-    struct vcpu *v;
+    struct hypercall_vpmask *vpmask = &this_cpu(hypercall_vpmask);
     uint32_t vector;
     uint64_t vcpu_mask;
+    unsigned int vp;
 
     /* Get input parameters. */
     if ( input->fast )
@@ -669,17 +693,20 @@ static int hvcall_ipi(union hypercall_input *input,
     if ( vector < 0x10 || vector > 0xff )
         return -EINVAL;
 
-    for_each_vcpu ( currd, v )
+    vpmask_empty(vpmask);
+    for (vp = 0; vp < 64; vp++)
     {
-        if ( v->vcpu_id >= (sizeof(vcpu_mask) * 8) )
-            return -EINVAL;
+        if ( !vcpu_mask )
+            break;
 
-        if ( !(vcpu_mask & (1ul << v->vcpu_id)) )
-            continue;
+        if ( vcpu_mask & 1 )
+            vpmask_set(vpmask, vp);
 
-        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
+        vcpu_mask >>= 1;
     }
 
+    send_ipi(vpmask, vector);
+
     return 0;
 }
 
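For readers without the rest of the series to hand, the following is a
minimal stand-alone C sketch (not Xen code) of the pattern the patch
adopts: the 64-bit vcpu_mask supplied by the guest is expanded into a
wider bitmap, and a for_each_vp()-style macro then walks the set bits to
deliver one IPI per target vCPU. MAX_VPS and vpmask_next_from() below are
hypothetical simplifications standing in for Xen's HVM_MAX_VCPUS and
find_first_bit()/find_next_bit(), and printf() stands in for
vlapic_set_irq().

/*
 * Stand-alone sketch of the hvcall_ipi() mask handling: expand a
 * 64-bit caller-supplied mask into a wider bitmap, then iterate over
 * the set bits with a for_each_vp()-style macro.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_VPS       128   /* stand-in for HVM_MAX_VCPUS */
#define BITS_PER_WORD 64
#define MASK_WORDS    (MAX_VPS / BITS_PER_WORD)

struct vpmask {
    uint64_t mask[MASK_WORDS];
};

static void vpmask_empty(struct vpmask *vpmask)
{
    for (unsigned int i = 0; i < MASK_WORDS; i++)
        vpmask->mask[i] = 0;
}

static void vpmask_set(struct vpmask *vpmask, unsigned int vp)
{
    vpmask->mask[vp / BITS_PER_WORD] |= UINT64_C(1) << (vp % BITS_PER_WORD);
}

/* Return the first set bit at or above 'start', or MAX_VPS if none. */
static unsigned int vpmask_next_from(const struct vpmask *vpmask,
                                     unsigned int start)
{
    for (unsigned int vp = start; vp < MAX_VPS; vp++)
        if (vpmask->mask[vp / BITS_PER_WORD] &
            (UINT64_C(1) << (vp % BITS_PER_WORD)))
            return vp;
    return MAX_VPS;
}

/* One helper covers both the "first" and "next" cases of the patch. */
#define for_each_vp(vpmask, vp)                       \
    for ((vp) = vpmask_next_from(vpmask, 0);          \
         (vp) < MAX_VPS;                              \
         (vp) = vpmask_next_from(vpmask, (vp) + 1))

int main(void)
{
    struct vpmask vpmask;
    uint64_t vcpu_mask = 0x805; /* guest-supplied mask: vCPUs 0, 2 and 11 */
    unsigned int vp;

    /* Expand the narrow mask into the wide bitmap, as hvcall_ipi() does. */
    vpmask_empty(&vpmask);
    for (vp = 0; vp < 64; vp++)
    {
        if (!vcpu_mask)
            break;

        if (vcpu_mask & 1)
            vpmask_set(&vpmask, vp);

        vcpu_mask >>= 1;
    }

    /* Walk the set bits; the real send_ipi() calls vlapic_set_irq() here. */
    for_each_vp (&vpmask, vp)
        printf("would IPI vCPU %u\n", vp);

    return 0;
}

Because the expansion loop and the delivery loop are decoupled by the
bitmap, a later patch can fill the same structure from a mask wider than
64 bits without touching the delivery side.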