From patchwork Wed Sep 20 09:22:31 2017
X-Patchwork-Submitter: Alexandru Stefan ISAILA
X-Patchwork-Id: 9961169
From: Alexandru Isaila
To: xen-devel@lists.xen.org
Date: Wed, 20 Sep 2017 12:22:31 +0300
Message-Id: <1505899353-13554-2-git-send-email-aisaila@bitdefender.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1505899353-13554-1-git-send-email-aisaila@bitdefender.com>
References: <1505899353-13554-1-git-send-email-aisaila@bitdefender.com>
Cc: jun.nakajima@intel.com, kevin.tian@intel.com, sstabellini@kernel.org,
 wei.liu2@citrix.com,
 suravee.suthikulpanit@amd.com, george.dunlap@eu.citrix.com,
 andrew.cooper3@citrix.com, tim@xen.org, paul.durrant@citrix.com,
 jbeulich@suse.com, boris.ostrovsky@oracle.com, ian.jackson@eu.citrix.com
Subject: [Xen-devel] [PATCH v4 1/3] x86/hvm: Rename enum hvm_copy_result to
 hvm_translation_result

From: Andrew Cooper

Signed-off-by: Andrew Cooper
Acked-by: Tim Deegan
Acked-by: Jan Beulich
Reviewed-by: Kevin Tian
Acked-by: George Dunlap
Reviewed-by: Boris Ostrovsky
---
 xen/arch/x86/hvm/dom0_build.c     |  2 +-
 xen/arch/x86/hvm/emulate.c        | 40 ++++++++++++++--------------
 xen/arch/x86/hvm/hvm.c            | 56 +++++++++++++++++++--------------------
 xen/arch/x86/hvm/intercept.c      | 20 +++++++-------
 xen/arch/x86/hvm/svm/nestedsvm.c  |  5 ++--
 xen/arch/x86/hvm/svm/svm.c        |  2 +-
 xen/arch/x86/hvm/viridian.c       |  2 +-
 xen/arch/x86/hvm/vmsi.c           |  2 +-
 xen/arch/x86/hvm/vmx/realmode.c   |  2 +-
 xen/arch/x86/hvm/vmx/vvmx.c       | 14 +++++-----
 xen/arch/x86/mm/shadow/common.c   | 12 ++++-----
 xen/common/libelf/libelf-loader.c |  4 +--
 xen/include/asm-x86/hvm/support.h | 40 ++++++++++++++--------------
 13 files changed, 101 insertions(+), 100 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 020c355..e8f746c 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -238,7 +238,7 @@ static int __init pvh_setup_vmx_realmode_helpers(struct domain *d)
     if ( !pvh_steal_ram(d, HVM_VM86_TSS_SIZE, 128, GB(4), &gaddr) )
     {
         if ( hvm_copy_to_guest_phys(gaddr, NULL, HVM_VM86_TSS_SIZE, v) !=
-             HVMCOPY_okay )
+             HVMTRANS_okay )
             printk("Unable to zero VM86 TSS area\n");
         d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED] =
             VM86_TSS_UPDATED | ((uint64_t)HVM_VM86_TSS_SIZE << 32) | gaddr;
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 54811c1..cc874ce 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -100,7 +100,7 @@ static int ioreq_server_read(const struct hvm_io_handler *io_handler,
                     uint32_t size,
                     uint64_t *data)
 {
-    if ( hvm_copy_from_guest_phys(data, addr, size) != HVMCOPY_okay )
+    if ( hvm_copy_from_guest_phys(data, addr, size) != HVMTRANS_okay )
         return X86EMUL_UNHANDLEABLE;
 
     return X86EMUL_OKAY;
@@ -893,18 +893,18 @@ static int __hvmemul_read(
 
     switch ( rc )
     {
-    case HVMCOPY_okay:
+    case HVMTRANS_okay:
         break;
-    case HVMCOPY_bad_gva_to_gfn:
+    case HVMTRANS_bad_linear_to_gfn:
         x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
         return X86EMUL_EXCEPTION;
-    case HVMCOPY_bad_gfn_to_mfn:
+    case HVMTRANS_bad_gfn_to_mfn:
         if ( access_type == hvm_access_insn_fetch )
             return X86EMUL_UNHANDLEABLE;
 
         return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
                                         hvmemul_ctxt, 0);
-    case HVMCOPY_gfn_paged_out:
-    case HVMCOPY_gfn_shared:
+    case HVMTRANS_gfn_paged_out:
+    case HVMTRANS_gfn_shared:
         return X86EMUL_RETRY;
     default:
         return X86EMUL_UNHANDLEABLE;
@@ -1012,15 +1012,15 @@ static int hvmemul_write(
 
     switch ( rc )
     {
-    case HVMCOPY_okay:
+    case HVMTRANS_okay:
         break;
-    case HVMCOPY_bad_gva_to_gfn:
+    case HVMTRANS_bad_linear_to_gfn:
         x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
         return X86EMUL_EXCEPTION;
-    case HVMCOPY_bad_gfn_to_mfn:
+    case HVMTRANS_bad_gfn_to_mfn:
         return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
                                          hvmemul_ctxt, 0);
-    case HVMCOPY_gfn_paged_out:
-    case HVMCOPY_gfn_shared:
+    case HVMTRANS_gfn_paged_out:
+    case HVMTRANS_gfn_shared:
         return X86EMUL_RETRY;
     default:
         return X86EMUL_UNHANDLEABLE;
@@ -1384,7 +1384,7 @@ static int hvmemul_rep_movs(
             return rc;
         }
 
-        rc = HVMCOPY_okay;
+        rc = HVMTRANS_okay;
     }
     else
         /*
@@ -1394,16 +1394,16 @@ static int hvmemul_rep_movs(
          */
         rc = hvm_copy_from_guest_phys(buf, sgpa, bytes);
 
-    if ( rc == HVMCOPY_okay )
+    if ( rc == HVMTRANS_okay )
         rc = hvm_copy_to_guest_phys(dgpa, buf, bytes, current);
 
     xfree(buf);
 
-    if ( rc == HVMCOPY_gfn_paged_out )
+    if ( rc == HVMTRANS_gfn_paged_out )
         return X86EMUL_RETRY;
-    if ( rc == HVMCOPY_gfn_shared )
+    if ( rc == HVMTRANS_gfn_shared )
         return X86EMUL_RETRY;
-    if ( rc != HVMCOPY_okay )
+    if ( rc != HVMTRANS_okay )
     {
         gdprintk(XENLOG_WARNING,
                  "Failed memory-to-memory REP MOVS: sgpa=%"PRIpaddr
                  " dgpa=%"PRIpaddr" reps=%lu bytes_per_rep=%u\n",
@@ -1513,10 +1513,10 @@ static int hvmemul_rep_stos(
 
         switch ( rc )
         {
-        case HVMCOPY_gfn_paged_out:
-        case HVMCOPY_gfn_shared:
+        case HVMTRANS_gfn_paged_out:
+        case HVMTRANS_gfn_shared:
             return X86EMUL_RETRY;
-        case HVMCOPY_okay:
+        case HVMTRANS_okay:
             return X86EMUL_OKAY;
         }
 
@@ -2172,7 +2172,7 @@ void hvm_emulate_init_per_insn(
                                         &addr) &&
              hvm_fetch_from_guest_linear(hvmemul_ctxt->insn_buf, addr,
                                          sizeof(hvmemul_ctxt->insn_buf),
-                                         pfec, NULL) == HVMCOPY_okay) ?
+                                         pfec, NULL) == HVMTRANS_okay) ?
             sizeof(hvmemul_ctxt->insn_buf) : 0;
     }
     else
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 6cb903d..488acbf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2915,9 +2915,9 @@ void hvm_task_switch(
 
     rc = hvm_copy_from_guest_linear(
         &tss, prev_tr.base, sizeof(tss), PFEC_page_present, &pfinfo);
-    if ( rc == HVMCOPY_bad_gva_to_gfn )
+    if ( rc == HVMTRANS_bad_linear_to_gfn )
         hvm_inject_page_fault(pfinfo.ec, pfinfo.linear);
-    if ( rc != HVMCOPY_okay )
+    if ( rc != HVMTRANS_okay )
         goto out;
 
     eflags = regs->eflags;
@@ -2955,20 +2955,20 @@ void hvm_task_switch(
                                   offsetof(typeof(tss), trace) -
                                   offsetof(typeof(tss), eip),
                                   PFEC_page_present, &pfinfo);
-    if ( rc == HVMCOPY_bad_gva_to_gfn )
+    if ( rc == HVMTRANS_bad_linear_to_gfn )
         hvm_inject_page_fault(pfinfo.ec, pfinfo.linear);
-    if ( rc != HVMCOPY_okay )
+    if ( rc != HVMTRANS_okay )
         goto out;
 
     rc = hvm_copy_from_guest_linear(
         &tss, tr.base, sizeof(tss), PFEC_page_present, &pfinfo);
-    if ( rc == HVMCOPY_bad_gva_to_gfn )
+    if ( rc == HVMTRANS_bad_linear_to_gfn )
         hvm_inject_page_fault(pfinfo.ec, pfinfo.linear);
     /*
-     * Note: The HVMCOPY_gfn_shared case could be optimised, if the callee
+     * Note: The HVMTRANS_gfn_shared case could be optimised, if the callee
      * functions knew we want RO access.
      */
-    if ( rc != HVMCOPY_okay )
+    if ( rc != HVMTRANS_okay )
         goto out;
 
     new_cpl = tss.eflags & X86_EFLAGS_VM ? 3 : tss.cs & 3;
@@ -3010,12 +3010,12 @@ void hvm_task_switch(
         rc = hvm_copy_to_guest_linear(tr.base + offsetof(typeof(tss), back_link),
                                       &tss.back_link, sizeof(tss.back_link),
                                       0, &pfinfo);
-        if ( rc == HVMCOPY_bad_gva_to_gfn )
+        if ( rc == HVMTRANS_bad_linear_to_gfn )
         {
             hvm_inject_page_fault(pfinfo.ec, pfinfo.linear);
             exn_raised = 1;
         }
-        else if ( rc != HVMCOPY_okay )
+        else if ( rc != HVMTRANS_okay )
             goto out;
     }
 
@@ -3051,12 +3051,12 @@ void hvm_task_switch(
         {
             rc = hvm_copy_to_guest_linear(linear_addr, &errcode, opsz, 0,
                                           &pfinfo);
-            if ( rc == HVMCOPY_bad_gva_to_gfn )
+            if ( rc == HVMTRANS_bad_linear_to_gfn )
             {
                 hvm_inject_page_fault(pfinfo.ec, pfinfo.linear);
                 exn_raised = 1;
             }
-            else if ( rc != HVMCOPY_okay )
+            else if ( rc != HVMTRANS_okay )
                 goto out;
         }
     }
@@ -3073,7 +3073,7 @@ void hvm_task_switch(
 #define HVMCOPY_to_guest   (1u<<0)
 #define HVMCOPY_phys       (0u<<2)
 #define HVMCOPY_linear     (1u<<2)
-static enum hvm_copy_result __hvm_copy(
+static enum hvm_translation_result __hvm_copy(
     void *buf, paddr_t addr, int size, struct vcpu *v, unsigned int flags,
     uint32_t pfec, pagefault_info_t *pfinfo)
 {
@@ -3098,7 +3098,7 @@ static enum hvm_copy_result __hvm_copy(
      * Hence we bail immediately if called from atomic context.
      */
     if ( in_atomic() )
-        return HVMCOPY_unhandleable;
+        return HVMTRANS_unhandleable;
 #endif
 
     while ( todo > 0 )
@@ -3113,15 +3113,15 @@ static enum hvm_copy_result __hvm_copy(
             if ( gfn == gfn_x(INVALID_GFN) )
             {
                 if ( pfec & PFEC_page_paged )
-                    return HVMCOPY_gfn_paged_out;
+                    return HVMTRANS_gfn_paged_out;
                 if ( pfec & PFEC_page_shared )
-                    return HVMCOPY_gfn_shared;
+                    return HVMTRANS_gfn_shared;
                 if ( pfinfo )
                 {
                     pfinfo->linear = addr;
                     pfinfo->ec = pfec & ~PFEC_implicit;
                 }
-                return HVMCOPY_bad_gva_to_gfn;
+                return HVMTRANS_bad_linear_to_gfn;
             }
             gpa |= (paddr_t)gfn << PAGE_SHIFT;
         }
@@ -3139,28 +3139,28 @@ static enum hvm_copy_result __hvm_copy(
         if ( v == current
              && !nestedhvm_vcpu_in_guestmode(v)
              && hvm_mmio_internal(gpa) )
-            return HVMCOPY_bad_gfn_to_mfn;
+            return HVMTRANS_bad_gfn_to_mfn;
 
         page = get_page_from_gfn(v->domain, gfn, &p2mt, P2M_UNSHARE);
 
         if ( !page )
-            return HVMCOPY_bad_gfn_to_mfn;
+            return HVMTRANS_bad_gfn_to_mfn;
 
         if ( p2m_is_paging(p2mt) )
         {
             put_page(page);
             p2m_mem_paging_populate(v->domain, gfn);
-            return HVMCOPY_gfn_paged_out;
+            return HVMTRANS_gfn_paged_out;
         }
         if ( p2m_is_shared(p2mt) )
         {
             put_page(page);
-            return HVMCOPY_gfn_shared;
+            return HVMTRANS_gfn_shared;
         }
         if ( p2m_is_grant(p2mt) )
         {
             put_page(page);
-            return HVMCOPY_unhandleable;
+            return HVMTRANS_unhandleable;
         }
 
         p = (char *)__map_domain_page(page) + (addr & ~PAGE_MASK);
@@ -3198,24 +3198,24 @@ static enum hvm_copy_result __hvm_copy(
         put_page(page);
     }
 
-    return HVMCOPY_okay;
+    return HVMTRANS_okay;
 }
 
-enum hvm_copy_result hvm_copy_to_guest_phys(
+enum hvm_translation_result hvm_copy_to_guest_phys(
     paddr_t paddr, void *buf, int size, struct vcpu *v)
 {
     return __hvm_copy(buf, paddr, size, v,
                       HVMCOPY_to_guest | HVMCOPY_phys, 0, NULL);
 }
 
-enum hvm_copy_result hvm_copy_from_guest_phys(
+enum hvm_translation_result hvm_copy_from_guest_phys(
     void *buf, paddr_t paddr, int size)
 {
     return __hvm_copy(buf, paddr, size, current,
                       HVMCOPY_from_guest | HVMCOPY_phys, 0, NULL);
 }
 
-enum hvm_copy_result hvm_copy_to_guest_linear(
+enum hvm_translation_result hvm_copy_to_guest_linear(
     unsigned long addr, void *buf, int size, uint32_t pfec,
     pagefault_info_t *pfinfo)
 {
@@ -3224,7 +3224,7 @@ enum hvm_copy_result hvm_copy_to_guest_linear(
                       PFEC_page_present | PFEC_write_access | pfec, pfinfo);
 }
 
-enum hvm_copy_result hvm_copy_from_guest_linear(
+enum hvm_translation_result hvm_copy_from_guest_linear(
     void *buf, unsigned long addr, int size, uint32_t pfec,
     pagefault_info_t *pfinfo)
 {
@@ -3233,7 +3233,7 @@ enum hvm_copy_result hvm_copy_from_guest_linear(
                       PFEC_page_present | pfec, pfinfo);
 }
 
-enum hvm_copy_result hvm_fetch_from_guest_linear(
+enum hvm_translation_result hvm_fetch_from_guest_linear(
     void *buf, unsigned long addr, int size, uint32_t pfec,
     pagefault_info_t *pfinfo)
 {
@@ -3670,7 +3670,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
                                         sizeof(sig), hvm_access_insn_fetch,
                                         cs, &addr) &&
              (hvm_fetch_from_guest_linear(sig, addr, sizeof(sig),
-                                          walk, NULL) == HVMCOPY_okay) &&
+                                          walk, NULL) == HVMTRANS_okay) &&
              (memcmp(sig, "\xf\xbxen", sizeof(sig)) == 0) )
         {
             regs->rip += sizeof(sig);
diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index e51efd5..ef82419 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -136,14 +136,14 @@ int hvm_process_io_intercept(const struct hvm_io_handler *handler,
                 switch ( hvm_copy_to_guest_phys(p->data + step * i,
                                                 &data, p->size, current) )
                 {
-                case HVMCOPY_okay:
+                case HVMTRANS_okay:
                     break;
-                case HVMCOPY_bad_gfn_to_mfn:
+                case HVMTRANS_bad_gfn_to_mfn:
                     /* Drop the write as real hardware would. */
                     continue;
-                case HVMCOPY_bad_gva_to_gfn:
-                case HVMCOPY_gfn_paged_out:
-                case HVMCOPY_gfn_shared:
+                case HVMTRANS_bad_linear_to_gfn:
+                case HVMTRANS_gfn_paged_out:
+                case HVMTRANS_gfn_shared:
                     ASSERT_UNREACHABLE();
                     /* fall through */
                 default:
@@ -164,14 +164,14 @@ int hvm_process_io_intercept(const struct hvm_io_handler *handler,
                 switch ( hvm_copy_from_guest_phys(&data, p->data + step * i,
                                                   p->size) )
                 {
-                case HVMCOPY_okay:
+                case HVMTRANS_okay:
                     break;
-                case HVMCOPY_bad_gfn_to_mfn:
+                case HVMTRANS_bad_gfn_to_mfn:
                     data = ~0;
                     break;
-                case HVMCOPY_bad_gva_to_gfn:
-                case HVMCOPY_gfn_paged_out:
-                case HVMCOPY_gfn_shared:
+                case HVMTRANS_bad_linear_to_gfn:
+                case HVMTRANS_gfn_paged_out:
+                case HVMTRANS_gfn_shared:
                     ASSERT_UNREACHABLE();
                     /* fall through */
                 default:
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 8fd9c23..66a1777 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -357,7 +357,7 @@ static int nsvm_vmrun_permissionmap(struct vcpu *v, bool_t viopm)
     struct vmcb_struct *host_vmcb = arch_svm->vmcb;
     unsigned long *ns_msrpm_ptr;
     unsigned int i;
-    enum hvm_copy_result ret;
+    enum hvm_translation_result ret;
     unsigned long *ns_viomap;
     bool_t ioport_80 = 1, ioport_ed = 1;
 
@@ -365,7 +365,8 @@ static int nsvm_vmrun_permissionmap(struct vcpu *v, bool_t viopm)
     ret = hvm_copy_from_guest_phys(svm->ns_cached_msrpm,
                                    ns_vmcb->_msrpm_base_pa, MSRPM_SIZE);
-    if (ret != HVMCOPY_okay) {
+    if ( ret != HVMTRANS_okay )
+    {
         gdprintk(XENLOG_ERR, "hvm_copy_from_guest_phys msrpm %u\n", ret);
         return 1;
     }
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 6b19b16..12ddc8a 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1266,7 +1266,7 @@ static void svm_emul_swint_injection(struct x86_event *event)
                                     PFEC_implicit, &pfinfo);
     if ( rc )
     {
-        if ( rc == HVMCOPY_bad_gva_to_gfn )
+        if ( rc == HVMTRANS_bad_linear_to_gfn )
         {
             fault = TRAP_page_fault;
             ec = pfinfo.ec;
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index e0546f3..f0fa59d 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -914,7 +914,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 
         /* Get input parameters. */
         if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
-                                      sizeof(input_params)) != HVMCOPY_okay )
+                                      sizeof(input_params)) != HVMTRANS_okay )
             break;
 
         /*
diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index 9b35e9b..7126de7 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -609,7 +609,7 @@ void msix_write_completion(struct vcpu *v)
         if ( desc &&
              hvm_copy_from_guest_phys(&data,
                                       v->arch.hvm_vcpu.hvm_io.msix_snoop_gpa,
-                                      sizeof(data)) == HVMCOPY_okay &&
+                                      sizeof(data)) == HVMTRANS_okay &&
              !(data & PCI_MSIX_VECTOR_BITMASK) )
             ctrl_address = snoop_addr;
     }
diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c
index 11bde58..12d43ad 100644
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -40,7 +40,7 @@ static void realmode_deliver_exception(
     last_byte = (vector * 4) + 3;
     if ( idtr->limit < last_byte ||
          hvm_copy_from_guest_phys(&cs_eip, idtr->base + vector * 4, 4) !=
-         HVMCOPY_okay )
+         HVMTRANS_okay )
     {
         /* Software interrupt? */
         if ( insn_len != 0 )
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index e2361a1..cd0ee0a 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -481,9 +481,9 @@ static int decode_vmx_inst(struct cpu_user_regs *regs,
         int rc = hvm_copy_from_guest_linear(poperandS, base, size, 0, &pfinfo);
 
-        if ( rc == HVMCOPY_bad_gva_to_gfn )
+        if ( rc == HVMTRANS_bad_linear_to_gfn )
             hvm_inject_page_fault(pfinfo.ec, pfinfo.linear);
-        if ( rc != HVMCOPY_okay )
+        if ( rc != HVMTRANS_okay )
             return X86EMUL_EXCEPTION;
     }
     decode->mem = base;
@@ -1468,7 +1468,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
     }
 
     rc = hvm_copy_from_guest_phys(&nvmcs_revid, gpa, sizeof(nvmcs_revid));
-    if ( rc != HVMCOPY_okay ||
+    if ( rc != HVMTRANS_okay ||
          (nvmcs_revid & ~VMX_BASIC_REVISION_MASK) ||
          ((nvmcs_revid ^ vmx_basic_msr) & VMX_BASIC_REVISION_MASK) )
     {
@@ -1746,9 +1746,9 @@ int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
     gpa = nvcpu->nv_vvmcxaddr;
 
     rc = hvm_copy_to_guest_linear(decode.mem, &gpa, decode.len, 0, &pfinfo);
-    if ( rc == HVMCOPY_bad_gva_to_gfn )
+    if ( rc == HVMTRANS_bad_linear_to_gfn )
         hvm_inject_page_fault(pfinfo.ec, pfinfo.linear);
-    if ( rc != HVMCOPY_okay )
+    if ( rc != HVMTRANS_okay )
         return X86EMUL_EXCEPTION;
 
     vmsucceed(regs);
@@ -1835,9 +1835,9 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs)
     switch ( decode.type ) {
     case VMX_INST_MEMREG_TYPE_MEMORY:
         rc = hvm_copy_to_guest_linear(decode.mem, &value, decode.len, 0, &pfinfo);
-        if ( rc == HVMCOPY_bad_gva_to_gfn )
+        if ( rc == HVMTRANS_bad_linear_to_gfn )
             hvm_inject_page_fault(pfinfo.ec, pfinfo.linear);
-        if ( rc != HVMCOPY_okay )
+        if ( rc != HVMTRANS_okay )
             return X86EMUL_EXCEPTION;
 
         break;
     case VMX_INST_MEMREG_TYPE_REG:
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 3926ed6..8b9310c 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -196,16 +196,16 @@ hvm_read(enum x86_segment seg,
 
     switch ( rc )
     {
-    case HVMCOPY_okay:
+    case HVMTRANS_okay:
         return X86EMUL_OKAY;
-    case HVMCOPY_bad_gva_to_gfn:
+    case HVMTRANS_bad_linear_to_gfn:
         x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &sh_ctxt->ctxt);
         return X86EMUL_EXCEPTION;
-    case HVMCOPY_bad_gfn_to_mfn:
-    case HVMCOPY_unhandleable:
+    case HVMTRANS_bad_gfn_to_mfn:
+    case HVMTRANS_unhandleable:
         return X86EMUL_UNHANDLEABLE;
-    case HVMCOPY_gfn_paged_out:
-    case HVMCOPY_gfn_shared:
+    case HVMTRANS_gfn_paged_out:
+    case HVMTRANS_gfn_shared:
         return X86EMUL_RETRY;
     }
diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
index c8b7ec9..0f46872 100644
--- a/xen/common/libelf/libelf-loader.c
+++ b/xen/common/libelf/libelf-loader.c
@@ -154,10 +154,10 @@ static elf_errorstatus elf_memcpy(struct vcpu *v, void *dst, void *src,
 #ifdef CONFIG_X86
     if ( is_hvm_vcpu(v) )
     {
-        enum hvm_copy_result rc;
+        enum hvm_translation_result rc;
 
         rc = hvm_copy_to_guest_phys((paddr_t)dst, src, size, v);
-        return rc != HVMCOPY_okay ? -1 : 0;
+        return rc != HVMTRANS_okay ? -1 : 0;
     }
 #endif
diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
index b18dbb6..e3b035d 100644
--- a/xen/include/asm-x86/hvm/support.h
+++ b/xen/include/asm-x86/hvm/support.h
@@ -53,23 +53,23 @@ extern unsigned int opt_hvm_debug_level;
 
 extern unsigned long hvm_io_bitmap[];
 
-enum hvm_copy_result {
-    HVMCOPY_okay = 0,
-    HVMCOPY_bad_gva_to_gfn,
-    HVMCOPY_bad_gfn_to_mfn,
-    HVMCOPY_unhandleable,
-    HVMCOPY_gfn_paged_out,
-    HVMCOPY_gfn_shared,
+enum hvm_translation_result {
+    HVMTRANS_okay,
+    HVMTRANS_bad_linear_to_gfn,
+    HVMTRANS_bad_gfn_to_mfn,
+    HVMTRANS_unhandleable,
+    HVMTRANS_gfn_paged_out,
+    HVMTRANS_gfn_shared,
 };
 
 /*
  * Copy to/from a guest physical address.
- * Returns HVMCOPY_okay, else HVMCOPY_bad_gfn_to_mfn if the given physical
+ * Returns HVMTRANS_okay, else HVMTRANS_bad_gfn_to_mfn if the given physical
  * address range does not map entirely onto ordinary machine memory.
  */
-enum hvm_copy_result hvm_copy_to_guest_phys(
+enum hvm_translation_result hvm_copy_to_guest_phys(
     paddr_t paddr, void *buf, int size, struct vcpu *v);
-enum hvm_copy_result hvm_copy_from_guest_phys(
+enum hvm_translation_result hvm_copy_from_guest_phys(
     void *buf, paddr_t paddr, int size);
 
 /*
@@ -79,13 +79,13 @@ enum hvm_copy_result hvm_copy_from_guest_phys(
  * to set them.
 *
 * Returns:
- *  HVMCOPY_okay: Copy was entirely successful.
- *  HVMCOPY_bad_gfn_to_mfn: Some guest physical address did not map to
- *                          ordinary machine memory.
- *  HVMCOPY_bad_gva_to_gfn: Some guest virtual address did not have a valid
- *                          mapping to a guest physical address.  The
- *                          pagefault_info_t structure will be filled in if
- *                          provided.
+ *  HVMTRANS_okay: Copy was entirely successful.
+ *  HVMTRANS_bad_gfn_to_mfn: Some guest physical address did not map to
+ *                           ordinary machine memory.
+ *  HVMTRANS_bad_linear_to_gfn: Some guest linear address did not have a
+ *                              valid mapping to a guest physical address.
+ *                              The pagefault_info_t structure will be filled
+ *                              in if provided.
  */
 typedef struct pagefault_info
 {
@@ -93,13 +93,13 @@ typedef struct pagefault_info
     int ec;
 } pagefault_info_t;
 
-enum hvm_copy_result hvm_copy_to_guest_linear(
+enum hvm_translation_result hvm_copy_to_guest_linear(
     unsigned long addr, void *buf, int size, uint32_t pfec,
     pagefault_info_t *pfinfo);
-enum hvm_copy_result hvm_copy_from_guest_linear(
+enum hvm_translation_result hvm_copy_from_guest_linear(
    void *buf, unsigned long addr, int size, uint32_t pfec,
    pagefault_info_t *pfinfo);
-enum hvm_copy_result hvm_fetch_from_guest_linear(
+enum hvm_translation_result hvm_fetch_from_guest_linear(
     void *buf, unsigned long addr, int size, uint32_t pfec,
     pagefault_info_t *pfinfo);