From patchwork Sun Mar 17 19:35:58 2019
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, wei.liu2@citrix.com,
    andrew.cooper3@citrix.com, paul.durrant@citrix.com, jbeulich@suse.com,
    roger.pau@citrix.com
Date: Sun, 17 Mar 2019 19:35:58 +0000
Message-ID: <1552851358-27178-2-git-send-email-igor.druzhinin@citrix.com>
In-Reply-To: <1552851358-27178-1-git-send-email-igor.druzhinin@citrix.com>
References: <1552851358-27178-1-git-send-email-igor.druzhinin@citrix.com>
Subject: [Xen-devel] [PATCH v4 2/2] x86/hvm: finish IOREQs correctly on completion path

Since the introduction of the linear_{read,write}() helpers in 3bdec530a5
("x86/HVM: split page straddling emulated accesses in more cases") the
completion path for IOREQs has been broken: if an IOREQ is in progress but
hvm_copy_{to,from}_guest_linear() returns HVMTRANS_okay (e.g. because the
P2M type of the source/destination has been changed by the IOREQ handler),
execution never re-enters hvmemul_do_io(), where IOREQs are completed. This
usually results in a domain crash when the next IOREQ enters
hvmemul_do_io() and finds the remnants of the previous one in the state
machine.
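To illustrate the failure mode, the following is a self-contained toy model
of the sequence described above. Every identifier in it is a hypothetical
stand-in for the Xen internals named in this message (the pending IOREQ
state, hvm_copy_from_guest_linear(), hvmemul_do_io()), not the real
interfaces:

    /*
     * Toy model of the broken completion path; all names are hypothetical
     * stand-ins, not the real Xen APIs.
     */
    #include <stdbool.h>
    #include <stdio.h>

    enum hvmtrans { TRANS_OKAY, TRANS_BAD_GFN_TO_MFN };

    static bool ioreq_in_flight;     /* models the vCPU's pending IOREQ   */
    static bool page_is_mmio = true; /* flipped by the emulator's handler */

    /* Models hvm_copy_from_guest_linear(): succeeds once the page is RAM. */
    static enum hvmtrans copy_from_guest(void)
    {
        return page_is_mmio ? TRANS_BAD_GFN_TO_MFN : TRANS_OKAY;
    }

    /* Models hvmemul_do_io(): the only place a pending IOREQ is retired. */
    static void do_io(void)
    {
        ioreq_in_flight = false;
    }

    /* Models the pre-patch linear_read(): trusts the copy result alone. */
    static void linear_read_prepatch(void)
    {
        if ( copy_from_guest() == TRANS_OKAY )
            return;  /* BUG: skips do_io(), leaving the IOREQ in flight */
        do_io();
    }

    int main(void)
    {
        ioreq_in_flight = true;  /* first pass sent an IOREQ to the emulator */
        page_is_mmio = false;    /* handler remapped the page to p2m_ram_rw  */
        linear_read_prepatch();  /* completion pass after the emulator reply */
        printf("IOREQ still pending: %s\n", ioreq_in_flight ? "yes" : "no");
        return 0;
    }

The model prints "yes": the IOREQ is left pending. In Xen proper this is
the stale state that the next emulated access trips over in
hvmemul_do_io(), crashing the domain.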
This particular issue was discovered in relation to the p2m_ioreq_server
type, where an emulator changed the memory type between p2m_ioreq_server
and p2m_ram_rw while responding to an IOREQ, which made hvm_copy_..()
behave differently on the way back.

Fix it for now by checking whether IOREQ completion is required (which can
be identified by querying the MMIO cache) before trying to finish the
memory access immediately through hvm_copy_..(); otherwise re-enter
hvmemul_do_io().

This change alone only addresses the IOREQ completion issue for a P2M type
change from MMIO to RAM in the middle of emulation. It leaves the case
where new IOREQs might be introduced by a P2M change from RAM to MMIO
(which is less likely to occur in practice); addressing that requires more
substantial changes in the MMIO emulation code.

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
Changes in v4:
* corrected the cases covered by the change in the description
* other minor suggestions
---
 xen/arch/x86/hvm/emulate.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index c236e7d..bfa3e1a 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -952,7 +952,7 @@ static int hvmemul_phys_mmio_access(
  * cache indexed by linear MMIO address.
  */
 static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
-    struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir)
+    struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir, bool create)
 {
     unsigned int i;
     struct hvm_mmio_cache *cache;
@@ -966,6 +966,9 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
             return cache;
     }
 
+    if ( !create )
+        return NULL;
+
     i = vio->mmio_cache_count;
     if( i == ARRAY_SIZE(vio->mmio_cache) )
         return NULL;
@@ -1000,7 +1003,7 @@ static int hvmemul_linear_mmio_access(
 {
     struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     unsigned long offset = gla & ~PAGE_MASK;
-    struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(vio, gla, dir);
+    struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(vio, gla, dir, true);
     unsigned int chunk, buffer_offset = 0;
     paddr_t gpa;
     unsigned long one_rep = 1;
@@ -1089,8 +1092,9 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
                        uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     pagefault_info_t pfinfo;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     unsigned int offset = addr & ~PAGE_MASK;
-    int rc;
+    int rc = HVMTRANS_bad_gfn_to_mfn;
 
     if ( offset + bytes > PAGE_SIZE )
     {
@@ -1104,7 +1108,14 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
         return rc;
     }
 
-    rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
+    /*
+     * If there is an MMIO cache entry for the access then we must be re-issuing
+     * an access that was previously handled as MMIO. Thus it is imperative that
+     * we handle this access in the same way to guarantee completion and hence
+     * clean up any interim state.
+     */
+    if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_READ, false) )
+        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
 
     switch ( rc )
     {
@@ -1135,8 +1146,9 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
                         uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     pagefault_info_t pfinfo;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     unsigned int offset = addr & ~PAGE_MASK;
-    int rc;
+    int rc = HVMTRANS_bad_gfn_to_mfn;
 
     if ( offset + bytes > PAGE_SIZE )
     {
@@ -1150,7 +1162,14 @@ static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
         return rc;
     }
 
-    rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
+    /*
+     * If there is an MMIO cache entry for the access then we must be re-issuing
+     * an access that was previously handled as MMIO. Thus it is imperative that
+     * we handle this access in the same way to guarantee completion and hence
+     * clean up any interim state.
+     */
+    if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_WRITE, false) )
+        rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
 
     switch ( rc )
     {
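For reference, the effect of the linear_read() hunks above reduces to the
flow below (a condensed sketch of the patched logic, not the verbatim
source; linear_write() is symmetric with IOREQ_WRITE and
hvm_copy_to_guest_linear()):

    /* Sketch of the patched linear_read() decision, condensed for clarity. */
    int rc = HVMTRANS_bad_gfn_to_mfn;   /* default: take the MMIO path */

    if ( !hvmemul_find_mmio_cache(vio, addr, IOREQ_READ, false) )
        /* No cache entry for this address: a plain RAM access. */
        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);

    /*
     * A cache hit leaves rc as HVMTRANS_bad_gfn_to_mfn, so the subsequent
     * switch routes the access back through the MMIO emulation path and
     * hence hvmemul_do_io(), where the in-flight IOREQ is completed.
     */

Note that the new create argument keeps the lookup side-effect free at
these call sites: with create=false a miss allocates nothing, so accesses
that were never handled as MMIO behave exactly as before the patch.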