From patchwork Fri Oct 11 11:07:05 2019
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 11185217
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH v3 1/2] KVM: arm/arm64: Allow reporting non-ISV data aborts to userspace
Date: Fri, 11 Oct 2019 13:07:05 +0200
Message-Id: <20191011110709.2764-2-christoffer.dall@arm.com>
In-Reply-To: <20191011110709.2764-1-christoffer.dall@arm.com>
References: <20191011110709.2764-1-christoffer.dall@arm.com>
Cc: Peter Maydell, Daniel P. Berrangé, Suzuki K Poulose, Marc Zyngier, James Morse, Julien Thierry, Stefan Hajnoczi, Heinrich Schuchardt, Alexander Graf, linux-arm-kernel@lists.infradead.org

For a long time, if a guest accessed memory outside of a memslot using
any of the load/store instructions in the architecture which don't
supply decoding information in the ESR_EL2 (the ISV bit is not set),
the kernel would print the following message and terminate the VM as a
result of returning -ENOSYS to userspace:

    load/store instruction decoding not implemented

The reason behind this message is that KVM assumes that all accesses
outside a memslot are MMIO accesses which should be handled by
userspace, and we originally expected to eventually implement some sort
of decoding of load/store instructions where the ISV bit was not set.

However, it turns out that many of the instructions which don't provide
decoding information on abort are not safe to use for MMIO accesses,
and the remaining few that would potentially make sense to use on MMIO
accesses, such as those with register writeback, are not used in
practice.  It also turns out that fetching an instruction from guest
memory can be a pretty horrible affair, involving stopping all CPUs on
SMP systems, handling multiple corner cases of address translation in
software, and more.  It doesn't appear likely that we'll ever implement
this in the kernel.

What is much more common is that a user has misconfigured their guest
and is actually not accessing an MMIO region, but just hitting some
random hole in the IPA space.  In this scenario, the error message
above is quite misleading and has led to a great deal of confusion over
the years.

It is, nevertheless, ABI to userspace, and we therefore need to
introduce a new capability that userspace explicitly enables to change
this behavior.

This patch introduces KVM_CAP_ARM_NISV_TO_USER (NISV meaning Non-ISV)
which does exactly that, and introduces a new exit reason to report the
event to userspace.  User space can then emulate an exception to the
guest, restart the guest, suspend the guest, or take any other
appropriate action as per the policy of the running system.

Reported-by: Heinrich Schuchardt
Signed-off-by: Christoffer Dall
Reviewed-by: Alexander Graf
---
 Documentation/virt/kvm/api.txt       | 33 ++++++++++++++++++++++++++++
 arch/arm/include/asm/kvm_arm.h       |  1 +
 arch/arm/include/asm/kvm_emulate.h   |  5 +++++
 arch/arm/include/asm/kvm_host.h      |  8 +++++++
 arch/arm64/include/asm/kvm_emulate.h |  5 +++++
 arch/arm64/include/asm/kvm_host.h    |  8 +++++++
 include/uapi/linux/kvm.h             |  7 ++++++
 virt/kvm/arm/arm.c                   | 21 ++++++++++++++++++
 virt/kvm/arm/mmio.c                  |  9 +++++++-
 9 files changed, 96 insertions(+), 1 deletion(-)

diff --git a/Documentation/virt/kvm/api.txt b/Documentation/virt/kvm/api.txt
index 4833904d32a5..7403f15657c2 100644
--- a/Documentation/virt/kvm/api.txt
+++ b/Documentation/virt/kvm/api.txt
@@ -4468,6 +4468,39 @@ Hyper-V SynIC state change.
 Notification is used to remap SynIC event/message pages and to
 enable/disable SynIC messages/events processing in userspace.
 
+        /* KVM_EXIT_ARM_NISV */
+        struct {
+            __u64 esr_iss;
+            __u64 fault_ipa;
+        } arm_nisv;
+
+Used on arm and arm64 systems. If a guest accesses memory not in a memslot,
+KVM will typically return to userspace and ask it to do MMIO emulation on its
+behalf. However, for certain classes of instructions, no instruction decode
+(direction, length of memory access) is provided, and fetching and decoding
+the instruction from the VM is overly complicated to live in the kernel.
+
+Historically, when this situation occurred, KVM would print a warning and kill
+the VM. KVM assumed that if the guest accessed non-memslot memory, it was
+trying to do I/O, which just couldn't be emulated, and the warning message was
+phrased accordingly. However, what happened more often was that a guest bug
+caused access outside the guest memory areas which should lead to a more
+meaningful warning message and an external abort in the guest, if the access
+did not fall within an I/O window.
+
+Userspace implementations can query for KVM_CAP_ARM_NISV_TO_USER, and enable
+this capability at VM creation. Once this is done, these types of errors will
+instead return to userspace with KVM_EXIT_ARM_NISV, with the valid bits from
+the HSR (arm) and ESR_EL2 (arm64) in the esr_iss field, and the faulting IPA
+in the fault_ipa field. Userspace can either fix up the access if it's
+actually an I/O access by decoding the instruction from guest memory (if it's
+very brave) and continue executing the guest, or it can decide to suspend,
+dump, or restart the guest.
+
+Note that KVM does not skip the faulting instruction as it does for
+KVM_EXIT_MMIO, but userspace has to emulate any change to the processing state
+if it decides to decode and emulate the instruction.
+
         /* Fix the size of the union. */
         char padding[256];
     };

diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
index 0125aa059d5b..9c04bd810d07 100644
--- a/arch/arm/include/asm/kvm_arm.h
+++ b/arch/arm/include/asm/kvm_arm.h
@@ -162,6 +162,7 @@
 #define HSR_ISV         (_AC(1, UL) << HSR_ISV_SHIFT)
 #define HSR_SRT_SHIFT   (16)
 #define HSR_SRT_MASK    (0xf << HSR_SRT_SHIFT)
+#define HSR_CM          (1 << 8)
 #define HSR_FSC         (0x3f)
 #define HSR_FSC_TYPE    (0x3c)
 #define HSR_SSE         (1 << 21)

diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index 40002416efec..e8ef349c04b4 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -167,6 +167,11 @@ static inline bool kvm_vcpu_dabt_isvalid(struct kvm_vcpu *vcpu)
     return kvm_vcpu_get_hsr(vcpu) & HSR_ISV;
 }
 
+static inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(const struct kvm_vcpu *vcpu)
+{
+    return kvm_vcpu_get_hsr(vcpu) & (HSR_CM | HSR_WNR | HSR_FSC);
+}
+
 static inline bool kvm_vcpu_dabt_iswrite(struct kvm_vcpu *vcpu)
 {
     return kvm_vcpu_get_hsr(vcpu) & HSR_WNR;

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 8a37c8e89777..19a92c49039c 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -76,6 +76,14 @@ struct kvm_arch {
 
     /* Mandated version of PSCI */
     u32 psci_version;
+
+    /*
+     * If we encounter a data abort without valid instruction syndrome
+     * information, report this to user space.  User space can (and
+     * should) opt in to this feature if KVM_CAP_ARM_NISV_TO_USER is
+     * supported.
+     */
+    bool return_nisv_io_abort_to_user;
 };
 
 #define KVM_NR_MEM_OBJS 40

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d69c1efc63e7..a3c967988e1d 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -258,6 +258,11 @@ static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
     return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
 }
 
+static inline unsigned long kvm_vcpu_dabt_iss_nisv_sanitized(const struct kvm_vcpu *vcpu)
+{
+    return kvm_vcpu_get_hsr(vcpu) & (ESR_ELx_CM | ESR_ELx_WNR | ESR_ELx_FSC);
+}
+
 static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
 {
     return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f656169db8c3..019bc560edc1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -83,6 +83,14 @@ struct kvm_arch {
 
     /* Mandated version of PSCI */
     u32 psci_version;
+
+    /*
+     * If we encounter a data abort without valid instruction syndrome
+     * information, report this to user space.  User space can (and
+     * should) opt in to this feature if KVM_CAP_ARM_NISV_TO_USER is
+     * supported.
+     */
+    bool return_nisv_io_abort_to_user;
 };
 
 #define KVM_NR_MEM_OBJS 40

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 52641d8ca9e8..7336ee8d98d7 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -235,6 +235,7 @@ struct kvm_hyperv_exit {
 #define KVM_EXIT_S390_STSI        25
 #define KVM_EXIT_IOAPIC_EOI       26
 #define KVM_EXIT_HYPERV           27
+#define KVM_EXIT_ARM_NISV         28
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -394,6 +395,11 @@ struct kvm_run {
         } eoi;
         /* KVM_EXIT_HYPERV */
         struct kvm_hyperv_exit hyperv;
+        /* KVM_EXIT_ARM_NISV */
+        struct {
+            __u64 esr_iss;
+            __u64 fault_ipa;
+        } arm_nisv;
         /* Fix the size of the union. */
         char padding[256];
     };
 
@@ -1000,6 +1006,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_PMU_EVENT_FILTER 173
 #define KVM_CAP_ARM_IRQ_LINE_LAYOUT_2 174
 #define KVM_CAP_HYPERV_DIRECT_TLBFLUSH 175
+#define KVM_CAP_ARM_NISV_TO_USER 176
 
 #ifdef KVM_CAP_IRQ_ROUTING

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 86c6aa1cb58e..e6d56f60e4b6 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -98,6 +98,26 @@ int kvm_arch_check_processor_compat(void)
     return 0;
 }
 
+int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
+                            struct kvm_enable_cap *cap)
+{
+    int r;
+
+    if (cap->flags)
+        return -EINVAL;
+
+    switch (cap->cap) {
+    case KVM_CAP_ARM_NISV_TO_USER:
+        r = 0;
+        kvm->arch.return_nisv_io_abort_to_user = true;
+        break;
+    default:
+        r = -EINVAL;
+        break;
+    }
+
+    return r;
+}
 
 /**
  * kvm_arch_init_vm - initializes a VM data structure
@@ -197,6 +217,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
     case KVM_CAP_IMMEDIATE_EXIT:
     case KVM_CAP_VCPU_EVENTS:
     case KVM_CAP_ARM_IRQ_LINE_LAYOUT_2:
+    case KVM_CAP_ARM_NISV_TO_USER:
         r = 1;
         break;
     case KVM_CAP_ARM_SET_DEVICE_ADDR:

diff --git a/virt/kvm/arm/mmio.c b/virt/kvm/arm/mmio.c
index 6af5c91337f2..70d3b449692c 100644
--- a/virt/kvm/arm/mmio.c
+++ b/virt/kvm/arm/mmio.c
@@ -167,7 +167,14 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
         if (ret)
             return ret;
     } else {
-        kvm_err("load/store instruction decoding not implemented\n");
+        if (vcpu->kvm->arch.return_nisv_io_abort_to_user) {
+            run->exit_reason = KVM_EXIT_ARM_NISV;
+            run->arm_nisv.esr_iss = kvm_vcpu_dabt_iss_nisv_sanitized(vcpu);
+            run->arm_nisv.fault_ipa = fault_ipa;
+            return 0;
+        }
+
+        kvm_pr_unimpl("Data abort outside memslots with no valid syndrome info\n");
         return -ENOSYS;
     }
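(A quick sketch, not part of the patch: this is roughly how a VMM is
expected to consume the new ABI.  Opening /dev/kvm, creating the VM and
vcpu, mmap()ing struct kvm_run and all other setup are assumed to happen
elsewhere; only the KVM_* names below come from this series.)

    /* Sketch: opt in at VM creation, then handle the new exit reason. */
    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void enable_nisv_to_user(int vm_fd)
    {
        struct kvm_enable_cap cap = {
            .cap = KVM_CAP_ARM_NISV_TO_USER,
        };

        /* One-shot opt-in on the VM file descriptor. */
        if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0) {
            perror("KVM_ENABLE_CAP(KVM_CAP_ARM_NISV_TO_USER)");
            exit(1);
        }
    }

    static void run_vcpu(int vcpu_fd, struct kvm_run *run)
    {
        for (;;) {
            if (ioctl(vcpu_fd, KVM_RUN, NULL) < 0) {
                perror("KVM_RUN");
                exit(1);
            }

            switch (run->exit_reason) {
            case KVM_EXIT_ARM_NISV:
                /*
                 * No syndrome decode available: report the fault and give
                 * up.  A braver VMM could decode the instruction from
                 * guest memory and emulate the access instead.
                 */
                fprintf(stderr, "non-ISV data abort: IPA=0x%llx ISS=0x%llx\n",
                        (unsigned long long)run->arm_nisv.fault_ipa,
                        (unsigned long long)run->arm_nisv.esr_iss);
                exit(1);
            default:
                /* KVM_EXIT_MMIO and friends are handled as before. */
                break;
            }
        }
    }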
From patchwork Fri Oct 11 11:07:06 2019
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 11185219
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu
Cc: Peter Maydell, Daniel P. Berrangé, Suzuki K Poulose, Marc Zyngier, James Morse, Julien Thierry, Stefan Hajnoczi, Heinrich Schuchardt, Alexander Graf, linux-arm-kernel@lists.infradead.org
Subject: [PATCH v3 2/2] KVM: arm/arm64: Allow user injection of external data aborts
Date: Fri, 11 Oct 2019 13:07:06 +0200
Message-Id: <20191011110709.2764-3-christoffer.dall@arm.com>
In-Reply-To: <20191011110709.2764-1-christoffer.dall@arm.com>
References: <20191011110709.2764-1-christoffer.dall@arm.com>

In some scenarios, such as a buggy guest or an incorrect configuration
of the VMM and firmware description data, userspace will detect a
memory access to a portion of the IPA space which is not mapped to any
MMIO region.  In this case, the appropriate action is to inject an
external abort into the guest.

The kernel already has functionality to inject an external abort, but
we need to wire up a signal from user space that lets user space tell
the kernel to do this.  It turns out we already have the set event
functionality, which we can reuse for this purpose.
Signed-off-by: Christoffer Dall
Reviewed-by: Alexander Graf
---
 Documentation/virt/kvm/api.txt    | 22 +++++++++++++++++++-
 arch/arm/include/uapi/asm/kvm.h   |  3 ++-
 arch/arm/kvm/guest.c              | 10 ++++++++++
 arch/arm64/include/uapi/asm/kvm.h |  3 ++-
 arch/arm64/kvm/guest.c            | 10 ++++++++++
 arch/arm64/kvm/inject_fault.c     |  4 ++--
 include/uapi/linux/kvm.h          |  1 +
 virt/kvm/arm/arm.c                |  1 +
 8 files changed, 49 insertions(+), 5 deletions(-)

diff --git a/Documentation/virt/kvm/api.txt b/Documentation/virt/kvm/api.txt
index 7403f15657c2..bd29d44af32b 100644
--- a/Documentation/virt/kvm/api.txt
+++ b/Documentation/virt/kvm/api.txt
@@ -1002,12 +1002,18 @@ Specifying exception.has_esr on a system that does not support it will return
 -EINVAL. Setting anything other than the lower 24bits of exception.serror_esr
 will return -EINVAL.
 
+It is not possible to read back a pending external abort (injected via
+KVM_SET_VCPU_EVENTS or otherwise) because such an exception is always delivered
+directly to the virtual CPU.
+
+
 struct kvm_vcpu_events {
     struct {
         __u8 serror_pending;
         __u8 serror_has_esr;
+        __u8 ext_dabt_pending;
         /* Align it to 8 bytes */
-        __u8 pad[6];
+        __u8 pad[5];
         __u64 serror_esr;
     } exception;
     __u32 reserved[12];
@@ -1051,9 +1057,23 @@ contain a valid state and shall be written into the VCPU.
 
 ARM/ARM64:
 
+User space may need to inject several types of events to the guest.
+
 Set the pending SError exception state for this VCPU. It is not possible to
 'cancel' an Serror that has been made pending.
 
+If the guest performed an access to I/O memory which could not be handled by
+userspace, for example because of missing instruction syndrome decode
+information or because there is no device mapped at the accessed IPA, then
+userspace can ask the kernel to inject an external abort using the address
+from the exiting fault on the VCPU. It is a programming error to set
+ext_dabt_pending after an exit which was not either KVM_EXIT_MMIO or
+KVM_EXIT_ARM_NISV. This feature is only available if the system supports
+KVM_CAP_ARM_INJECT_EXT_DABT. This is a helper which provides commonality in
+how userspace reports accesses for the above cases to guests, across different
+userspace implementations. Nevertheless, userspace can still emulate all Arm
+exceptions by manipulating individual registers using the KVM_SET_ONE_REG API.
+
 See KVM_GET_VCPU_EVENTS for the data structure.

diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index 2769360f195c..03cd7c19a683 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -131,8 +131,9 @@ struct kvm_vcpu_events {
     struct {
         __u8 serror_pending;
         __u8 serror_has_esr;
+        __u8 ext_dabt_pending;
         /* Align it to 8 bytes */
-        __u8 pad[6];
+        __u8 pad[5];
         __u64 serror_esr;
     } exception;
     __u32 reserved[12];

diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c
index 684cf64b4033..735f9b007e58 100644
--- a/arch/arm/kvm/guest.c
+++ b/arch/arm/kvm/guest.c
@@ -255,6 +255,12 @@ int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
 {
     events->exception.serror_pending = !!(*vcpu_hcr(vcpu) & HCR_VA);
 
+    /*
+     * We never return a pending ext_dabt here because we deliver it to
+     * the virtual CPU directly when setting the event and it's no longer
+     * 'pending' at this point.
+     */
+
     return 0;
 }

@@ -263,12 +269,16 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 {
     bool serror_pending = events->exception.serror_pending;
     bool has_esr = events->exception.serror_has_esr;
+    bool ext_dabt_pending = events->exception.ext_dabt_pending;
 
     if (serror_pending && has_esr)
         return -EINVAL;
     else if (serror_pending)
         kvm_inject_vabt(vcpu);
 
+    if (ext_dabt_pending)
+        kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
+
     return 0;
 }

diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 67c21f9bdbad..d49c17a80491 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -164,8 +164,9 @@ struct kvm_vcpu_events {
     struct {
         __u8 serror_pending;
         __u8 serror_has_esr;
+        __u8 ext_dabt_pending;
         /* Align it to 8 bytes */
-        __u8 pad[6];
+        __u8 pad[5];
         __u64 serror_esr;
     } exception;
     __u32 reserved[12];

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index dfd626447482..ca613a44c6ec 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -712,6 +712,12 @@ int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
     if (events->exception.serror_pending && events->exception.serror_has_esr)
         events->exception.serror_esr = vcpu_get_vsesr(vcpu);
 
+    /*
+     * We never return a pending ext_dabt here because we deliver it to
+     * the virtual CPU directly when setting the event and it's no longer
+     * 'pending' at this point.
+     */
+
     return 0;
 }

@@ -720,6 +726,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 {
     bool serror_pending = events->exception.serror_pending;
     bool has_esr = events->exception.serror_has_esr;
+    bool ext_dabt_pending = events->exception.ext_dabt_pending;
 
     if (serror_pending && has_esr) {
         if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
@@ -733,6 +740,9 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
         kvm_inject_vabt(vcpu);
     }
 
+    if (ext_dabt_pending)
+        kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
+
     return 0;
 }

diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index a9d25a305af5..ccdb6a051ab2 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -109,7 +109,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 
 /**
  * kvm_inject_dabt - inject a data abort into the guest
- * @vcpu: The VCPU to receive the undefined exception
+ * @vcpu: The VCPU to receive the data abort
  * @addr: The address to report in the DFAR
  *
  * It is assumed that this code is called from the VCPU thread and that the
@@ -125,7 +125,7 @@ void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr)
 
 /**
  * kvm_inject_pabt - inject a prefetch abort into the guest
- * @vcpu: The VCPU to receive the undefined exception
+ * @vcpu: The VCPU to receive the prefetch abort
  * @addr: The address to report in the DFAR
  *
  * It is assumed that this code is called from the VCPU thread and that the

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 7336ee8d98d7..65db5a4257ec 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1007,6 +1007,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_IRQ_LINE_LAYOUT_2 174
 #define KVM_CAP_HYPERV_DIRECT_TLBFLUSH 175
 #define KVM_CAP_ARM_NISV_TO_USER 176
+#define KVM_CAP_ARM_INJECT_EXT_DABT 177
 
 #ifdef KVM_CAP_IRQ_ROUTING

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index e6d56f60e4b6..12064780f1d8 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -218,6 +218,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
     case KVM_CAP_VCPU_EVENTS:
     case KVM_CAP_ARM_IRQ_LINE_LAYOUT_2:
     case KVM_CAP_ARM_NISV_TO_USER:
+    case KVM_CAP_ARM_INJECT_EXT_DABT:
         r = 1;
         break;
     case KVM_CAP_ARM_SET_DEVICE_ADDR:
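(Again a sketch rather than part of the patch: with the new
ext_dabt_pending field, a VMM that decides it cannot handle the access
can ask KVM to deliver an external data abort as below.  The vcpu file
descriptor and capability probing are assumed to be handled elsewhere.)

    /* Sketch: ask KVM to make the guest take an external data abort. */
    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int inject_external_dabt(int vcpu_fd)
    {
        struct kvm_vcpu_events events = {
            .exception.ext_dabt_pending = 1,
        };

        /*
         * KVM uses the faulting address recorded at the preceding
         * KVM_EXIT_MMIO/KVM_EXIT_ARM_NISV exit, so no address is passed
         * here; the abort is delivered before the vcpu runs again.
         */
        return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
    }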
From patchwork Fri Oct 11 11:07:07 2019
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 11185223
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu
Cc: Peter Maydell, Daniel P. Berrangé, Suzuki K Poulose, Marc Zyngier, James Morse, Julien Thierry, Stefan Hajnoczi, Heinrich Schuchardt, Alexander Graf, linux-arm-kernel@lists.infradead.org
Subject: [kvmtool v3 3/5] update headers: Update the KVM headers for new Arm fault reporting features
Date: Fri, 11 Oct 2019 13:07:07 +0200
Message-Id: <20191011110709.2764-4-christoffer.dall@arm.com>
In-Reply-To: <20191011110709.2764-1-christoffer.dall@arm.com>
References: <20191011110709.2764-1-christoffer.dall@arm.com>

In preparation for improving our handling of guest aborts with missing
decode info or outside any mapped resource, sync updated Linux header
files.

NOTE: This is a development update and these headers are not yet in an
upstream tree. DO NOT MERGE.

Signed-off-by: Christoffer Dall
---
 arm/aarch32/include/asm/kvm.h | 3 ++-
 arm/aarch64/include/asm/kvm.h | 3 ++-
 include/linux/kvm.h           | 8 ++++++++
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arm/aarch32/include/asm/kvm.h b/arm/aarch32/include/asm/kvm.h
index 4602464..b450900 100644
--- a/arm/aarch32/include/asm/kvm.h
+++ b/arm/aarch32/include/asm/kvm.h
@@ -131,8 +131,9 @@ struct kvm_vcpu_events {
     struct {
         __u8 serror_pending;
         __u8 serror_has_esr;
+        __u8 ext_dabt_pending;
         /* Align it to 8 bytes */
-        __u8 pad[6];
+        __u8 pad[5];
         __u64 serror_esr;
     } exception;
     __u32 reserved[12];

diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..e4cf9bd 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -160,8 +160,9 @@ struct kvm_vcpu_events {
     struct {
         __u8 serror_pending;
         __u8 serror_has_esr;
+        __u8 ext_dabt_pending;
         /* Align it to 8 bytes */
-        __u8 pad[6];
+        __u8 pad[5];
         __u64 serror_esr;
     } exception;
     __u32 reserved[12];

diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 6d4ea4b..fadebb4 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -235,6 +235,7 @@ struct kvm_hyperv_exit {
 #define KVM_EXIT_S390_STSI        25
 #define KVM_EXIT_IOAPIC_EOI       26
 #define KVM_EXIT_HYPERV           27
+#define KVM_EXIT_ARM_NISV         28
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -392,6 +393,11 @@ struct kvm_run {
         } eoi;
         /* KVM_EXIT_HYPERV */
         struct kvm_hyperv_exit hyperv;
+        /* KVM_EXIT_ARM_NISV */
+        struct {
+            __u64 esr_iss;
+            __u64 fault_ipa;
+        } arm_nisv;
         /* Fix the size of the union. */
         char padding[256];
     };
 
@@ -988,6 +994,8 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_NISV_TO_USER 176
+#define KVM_CAP_ARM_INJECT_EXT_DABT 177
 
 #ifdef KVM_CAP_IRQ_ROUTING
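(Sketch, not part of the patch: once these headers are in place, a VMM
can probe for the two new capabilities with KVM_CHECK_EXTENSION before
depending on them; kvm_has_cap() below is a hypothetical helper, and
sys_fd is the /dev/kvm file descriptor assumed to be opened elsewhere.)

    /* Sketch: capability probing against the synced uapi headers. */
    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <stdbool.h>

    static bool kvm_has_cap(int sys_fd, long cap)
    {
        /* KVM_CHECK_EXTENSION returns > 0 when the capability exists. */
        return ioctl(sys_fd, KVM_CHECK_EXTENSION, cap) > 0;
    }

    /*
     * Usage:
     *   if (kvm_has_cap(sys_fd, KVM_CAP_ARM_NISV_TO_USER))
     *       ... enable it with KVM_ENABLE_CAP on the VM fd ...
     *   if (kvm_has_cap(sys_fd, KVM_CAP_ARM_INJECT_EXT_DABT))
     *       ... ext_dabt_pending may be used on this host ...
     */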
From patchwork Fri Oct 11 11:07:08 2019
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 11185225
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu
Cc: Peter Maydell, Daniel P. Berrangé, Suzuki K Poulose, Marc Zyngier, James Morse, Julien Thierry, Stefan Hajnoczi, Heinrich Schuchardt, Alexander Graf, linux-arm-kernel@lists.infradead.org
Subject: [kvmtool v3 4/5] arm: Handle exits from undecoded load/store instructions
Date: Fri, 11 Oct 2019 13:07:08 +0200
Message-Id: <20191011110709.2764-5-christoffer.dall@arm.com>
In-Reply-To: <20191011110709.2764-1-christoffer.dall@arm.com>
References: <20191011110709.2764-1-christoffer.dall@arm.com>

KVM occasionally encounters guests that attempt to access memory
outside the registered RAM memory slots using instructions that don't
provide decoding information in the ESR_EL2 (the ISV bit is not set),
and historically this has led to the kernel printing a confusing error
message in dmesg and returning -ENOSYS from KVM_RUN.

KVM/Arm now has KVM_CAP_ARM_NISV_TO_USER, which can be enabled from
userspace, and which allows us to handle this exit with a little more
helpful information for the user.  For example, we can at least tell
the user if the guest just hit a hole in the guest's memory map, or if
this appeared to be an attempt at doing MMIO.

Signed-off-by: Christoffer Dall
---
 arm/kvm-cpu.c     | 20 +++++++++++++++++++-
 arm/kvm.c         |  8 ++++++++
 include/kvm/kvm.h |  1 +
 kvm.c             |  1 +
 mmio.c            | 11 +++++++++++
 5 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..25bd3ed 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -136,7 +136,25 @@ void kvm_cpu__delete(struct kvm_cpu *vcpu)
 
 bool kvm_cpu__handle_exit(struct kvm_cpu *vcpu)
 {
-    return false;
+    switch (vcpu->kvm_run->exit_reason) {
+    case KVM_EXIT_ARM_NISV: {
+        u64 phys_addr = vcpu->kvm_run->arm_nisv.fault_ipa;
+
+        if (!arm_addr_in_ioport_region(phys_addr) &&
+            !kvm__mmio_exists(vcpu, phys_addr))
+            die("Guest accessed memory outside RAM and IO ranges");
+
+        /*
+         * We cannot fetch and decode instructions from a KVM guest,
+         * which used a load/store instruction that doesn't get
+         * decoded in the ESR towards an I/O device, so we have no
+         * choice but to exit to the user with an error.
+         */
+        die("Guest accessed I/O device with unsupported load/store instruction");
+    }
+    default:
+        return false;
+    }
 }
 
 void kvm_cpu__show_page_tables(struct kvm_cpu *vcpu)

diff --git a/arm/kvm.c b/arm/kvm.c
index 1f85fc6..2572ac2 100644
--- a/arm/kvm.c
+++ b/arm/kvm.c
@@ -59,6 +59,8 @@ void kvm__arch_set_cmdline(char *cmdline, bool video)
 
 void kvm__arch_init(struct kvm *kvm, const char *hugetlbfs_path, u64 ram_size)
 {
+    struct kvm_enable_cap enable_cap = { .flags = 0 };
+
     /*
      * Allocate guest memory. We must align our buffer to 64K to
      * correlate with the maximum guest page size for virtio-mmio.
@@ -83,6 +85,12 @@ void kvm__arch_init(struct kvm *kvm, const char *hugetlbfs_path, u64 ram_size)
     madvise(kvm->arch.ram_alloc_start, kvm->arch.ram_alloc_size,
             MADV_HUGEPAGE);
 
+    if (kvm__supports_extension(kvm, KVM_CAP_ARM_NISV_TO_USER)) {
+        enable_cap.cap = KVM_CAP_ARM_NISV_TO_USER;
+        if (ioctl(kvm->vm_fd, KVM_ENABLE_CAP, &enable_cap) < 0)
+            die("unable to enable NISV_TO_USER capability");
+    }
+
     /* Create the virtual GIC. */
     if (gic__create(kvm, kvm->cfg.arch.irqchip))
         die("Failed to create virtual GIC");

diff --git a/include/kvm/kvm.h b/include/kvm/kvm.h
index 7a73818..05d90ee 100644
--- a/include/kvm/kvm.h
+++ b/include/kvm/kvm.h
@@ -107,6 +107,7 @@ bool kvm__emulate_io(struct kvm_cpu *vcpu, u16 port, void *data, int direction,
 bool kvm__emulate_mmio(struct kvm_cpu *vcpu, u64 phys_addr, u8 *data, u32 len, u8 is_write);
 int kvm__register_mem(struct kvm *kvm, u64 guest_phys, u64 size, void *userspace_addr,
                       enum kvm_mem_type type);
+bool kvm__mmio_exists(struct kvm_cpu *vcpu, u64 phys_addr);
 
 static inline int kvm__register_ram(struct kvm *kvm, u64 guest_phys, u64 size,
                                     void *userspace_addr)
 {

diff --git a/kvm.c b/kvm.c
index 57c4ff9..03ec43f 100644
--- a/kvm.c
+++ b/kvm.c
@@ -55,6 +55,7 @@ const char *kvm_exit_reasons[] = {
 #ifdef CONFIG_PPC64
     DEFINE_KVM_EXIT_REASON(KVM_EXIT_PAPR_HCALL),
 #endif
+    DEFINE_KVM_EXIT_REASON(KVM_EXIT_ARM_NISV),
 };
 
 static int pause_event;

diff --git a/mmio.c b/mmio.c
index 61e1d47..2ab7fa7 100644
--- a/mmio.c
+++ b/mmio.c
@@ -139,3 +139,14 @@ bool kvm__emulate_mmio(struct kvm_cpu *vcpu, u64 phys_addr, u8 *data, u32 len, u
 
     return true;
 }
+
+bool kvm__mmio_exists(struct kvm_cpu *vcpu, u64 phys_addr)
+{
+    struct mmio_mapping *mmio;
+
+    br_read_lock(vcpu->kvm);
+    mmio = mmio_search(&mmio_tree, phys_addr, 1);
+    br_read_unlock(vcpu->kvm);
+
+    return mmio != NULL;
+}
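(Sketch, not part of the patch: a VMM that wants to report more than
kvmtool does here can pick apart the sanitized ISS bits delivered in
run->arm_nisv.esr_iss.  The bit positions below follow the architectural
ESR_ELx data-abort ISS layout as used by the kernel side of this series
(WnR bit 6, CM bit 8, DFSC bits 5:0); they should be checked against the
Arm ARM before being relied upon.)

    /* Sketch: decode the sanitized ISS bits reported with KVM_EXIT_ARM_NISV. */
    #include <stdint.h>
    #include <stdio.h>

    #define NISV_ISS_WNR  (1ULL << 6)   /* write, not read */
    #define NISV_ISS_CM   (1ULL << 8)   /* cache maintenance operation */
    #define NISV_ISS_FSC  0x3fULL       /* data fault status code */

    static void report_nisv(uint64_t esr_iss, uint64_t fault_ipa)
    {
        fprintf(stderr, "non-ISV data abort at IPA 0x%llx: %s%s, fsc=0x%llx\n",
                (unsigned long long)fault_ipa,
                (esr_iss & NISV_ISS_WNR) ? "write" : "read",
                (esr_iss & NISV_ISS_CM) ? " (cache maintenance)" : "",
                (unsigned long long)(esr_iss & NISV_ISS_FSC));
    }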
From patchwork Fri Oct 11 11:07:09 2019
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 11185227
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu
Cc: Peter Maydell, Daniel P. Berrangé, Suzuki K Poulose, Marc Zyngier, James Morse, Julien Thierry, Stefan Hajnoczi, Heinrich Schuchardt, Alexander Graf, linux-arm-kernel@lists.infradead.org
Subject: [kvmtool v3 5/5] arm: Inject external data aborts when accessing holes in the memory map
Date: Fri, 11 Oct 2019 13:07:09 +0200
Message-Id: <20191011110709.2764-6-christoffer.dall@arm.com>
In-Reply-To: <20191011110709.2764-1-christoffer.dall@arm.com>
References: <20191011110709.2764-1-christoffer.dall@arm.com>

Occasionally guests will attempt to access parts of the guest memory
map where there is... nothing at all.  Until now, we've handled this by
either forcefully killing the guest, or silently (unless a debug option
was enabled) ignoring the access.  Neither is very helpful to a user,
who is most likely running either a broken or misconfigured guest.

A more appropriate action is to inject an external abort into the
guest.  Luckily, with KVM_CAP_ARM_INJECT_EXT_DABT, we can use the set
event mechanism and ask KVM to do this for us.

So we add an architecture-specific hook to handle accesses to MMIO
regions which cannot be found, and allow it to report whether the
invalid access was handled or not.
Signed-off-by: Christoffer Dall
---
 arm/include/arm-common/kvm-cpu-arch.h | 16 ++++++++++++++++
 arm/kvm-cpu.c                         |  2 +-
 mips/include/kvm/kvm-cpu-arch.h       |  5 +++++
 mmio.c                                |  3 ++-
 powerpc/include/kvm/kvm-cpu-arch.h    |  5 +++++
 x86/include/kvm/kvm-cpu-arch.h        |  5 +++++
 6 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/arm/include/arm-common/kvm-cpu-arch.h b/arm/include/arm-common/kvm-cpu-arch.h
index 923d2c4..33defa2 100644
--- a/arm/include/arm-common/kvm-cpu-arch.h
+++ b/arm/include/arm-common/kvm-cpu-arch.h
@@ -57,6 +57,22 @@ static inline bool kvm_cpu__emulate_mmio(struct kvm_cpu *vcpu, u64 phys_addr,
     return kvm__emulate_mmio(vcpu, phys_addr, data, len, is_write);
 }
 
+static inline bool kvm_cpu__mmio_not_found(struct kvm_cpu *vcpu, u64 phys_addr)
+{
+    struct kvm_vcpu_events events = {
+        .exception.ext_dabt_pending = 1,
+    };
+    int err;
+
+    if (!kvm__supports_extension(vcpu->kvm, KVM_CAP_ARM_INJECT_EXT_DABT))
+        return false;
+
+    err = ioctl(vcpu->vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
+    if (err)
+        die("failed to inject external abort");
+    return true;
+}
+
 unsigned long kvm_cpu__get_vcpu_mpidr(struct kvm_cpu *vcpu);
 
 #endif /* ARM_COMMON__KVM_CPU_ARCH_H */

diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 25bd3ed..321a3e4 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -142,7 +142,7 @@ bool kvm_cpu__handle_exit(struct kvm_cpu *vcpu)
 
         if (!arm_addr_in_ioport_region(phys_addr) &&
             !kvm__mmio_exists(vcpu, phys_addr))
-            die("Guest accessed memory outside RAM and IO ranges");
+            return kvm_cpu__mmio_not_found(vcpu, phys_addr);
 
         /*
          * We cannot fetch and decode instructions from a KVM guest,

diff --git a/mips/include/kvm/kvm-cpu-arch.h b/mips/include/kvm/kvm-cpu-arch.h
index 45e69f6..512ab34 100644
--- a/mips/include/kvm/kvm-cpu-arch.h
+++ b/mips/include/kvm/kvm-cpu-arch.h
@@ -40,4 +40,9 @@ static inline bool kvm_cpu__emulate_mmio(struct kvm_cpu *vcpu, u64 phys_addr, u8
     return kvm__emulate_mmio(vcpu, phys_addr, data, len, is_write);
 }
 
+static inline bool kvm_cpu__mmio_not_found(struct kvm_cpu *vcpu, u64 phys_addr)
+{
+    return false;
+}
+
 #endif /* KVM__KVM_CPU_ARCH_H */

diff --git a/mmio.c b/mmio.c
index 2ab7fa7..d6df303 100644
--- a/mmio.c
+++ b/mmio.c
@@ -130,7 +130,8 @@ bool kvm__emulate_mmio(struct kvm_cpu *vcpu, u64 phys_addr, u8 *data, u32 len, u
     if (mmio)
         mmio->mmio_fn(vcpu, phys_addr, data, len, is_write, mmio->ptr);
     else {
-        if (vcpu->kvm->cfg.mmio_debug)
+        if (!kvm_cpu__mmio_not_found(vcpu, phys_addr) &&
+            vcpu->kvm->cfg.mmio_debug)
             fprintf(stderr, "Warning: Ignoring MMIO %s at %016llx (length %u)\n",
                     to_direction(is_write),
                     (unsigned long long)phys_addr, len);

diff --git a/powerpc/include/kvm/kvm-cpu-arch.h b/powerpc/include/kvm/kvm-cpu-arch.h
index a69e0cc..64b69b1 100644
--- a/powerpc/include/kvm/kvm-cpu-arch.h
+++ b/powerpc/include/kvm/kvm-cpu-arch.h
@@ -76,4 +76,9 @@ static inline bool kvm_cpu__emulate_io(struct kvm_cpu *vcpu, u16 port, void *dat
 bool kvm_cpu__emulate_mmio(struct kvm_cpu *vcpu, u64 phys_addr, u8 *data, u32 len, u8 is_write);
 
+static inline bool kvm_cpu__mmio_not_found(struct kvm_cpu *vcpu, u64 phys_addr)
+{
+    return false;
+}
+
 #endif /* KVM__KVM_CPU_ARCH_H */

diff --git a/x86/include/kvm/kvm-cpu-arch.h b/x86/include/kvm/kvm-cpu-arch.h
index 05e5bb6..10cbe6e 100644
--- a/x86/include/kvm/kvm-cpu-arch.h
+++ b/x86/include/kvm/kvm-cpu-arch.h
@@ -47,4 +47,9 @@ static inline bool kvm_cpu__emulate_mmio(struct kvm_cpu *vcpu, u64 phys_addr, u8
     return kvm__emulate_mmio(vcpu, phys_addr, data, len, is_write);
 }
 
+static inline bool kvm_cpu__mmio_not_found(struct kvm_cpu *vcpu, u64 phys_addr)
+{
+    return false;
+}
+
 #endif /* KVM__KVM_CPU_ARCH_H */
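(A final sketch tying the series together, not taken from any of the
patches: a VMM's KVM_EXIT_ARM_NISV handler that injects an external
data abort and resumes the guest when the faulting IPA hits a hole in
the memory map.  mmio_device_exists() is a hypothetical lookup in the
VMM's own device tables.)

    /* Sketch: combined NISV exit handling plus external-abort injection. */
    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <stdbool.h>
    #include <stdint.h>

    bool mmio_device_exists(uint64_t ipa);  /* hypothetical VMM helper */

    /* Returns true if the vcpu can simply be re-run afterwards. */
    static bool handle_arm_nisv(int vcpu_fd, struct kvm_run *run)
    {
        if (!mmio_device_exists(run->arm_nisv.fault_ipa)) {
            struct kvm_vcpu_events events = {
                .exception.ext_dabt_pending = 1,
            };

            /* Hole in the memory map: let the guest see an external abort. */
            return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events) == 0;
        }

        /*
         * A real device but no syndrome information: without instruction
         * decoding in userspace there is nothing sensible left to do.
         */
        return false;
    }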