From patchwork Wed Aug 25 16:17:46 2021
From: Alexandru Elisei <alexandru.elisei@arm.com>
To: maz@kernel.org, james.morse@arm.com, suzuki.poulose@arm.com,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    will@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v4 10/39] KVM: arm64: Print a warning for unexpected
 faults on locked memslots
Date: Wed, 25 Aug 2021 17:17:46 +0100
Message-Id: <20210825161815.266051-11-alexandru.elisei@arm.com>
In-Reply-To: <20210825161815.266051-1-alexandru.elisei@arm.com>
References: <20210825161815.266051-1-alexandru.elisei@arm.com>
When userspace unmaps a VMA backing a memslot, the corresponding stage 2
address range gets unmapped via the MMU notifiers. This makes it possible
to get stage 2 faults on a locked memslot, which might not be what
userspace wants, because the purpose of locking a memslot is to avoid
stage 2 faults in the first place. Addresses can also be unmapped from
stage 2 for other reasons, such as bugs in the implementation of the lock
memslot API, however unlikely that might seem. Make debugging easier by
printing a warning when this happens.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
---
 arch/arm64/kvm/mmu.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 3ab8eba808ae..d66d89c18045 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1298,6 +1298,27 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	/* Userspace should not be able to register out-of-bounds IPAs */
 	VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->kvm));
 
+	if (memslot_is_locked(memslot)) {
+		const char *fault_type_str;
+
+		if (kvm_vcpu_trap_is_exec_fault(vcpu))
+			goto handle_fault;
+
+		if (fault_status == FSC_ACCESS)
+			fault_type_str = "access";
+		else if (write_fault && (memslot->arch.flags & KVM_MEMSLOT_LOCK_WRITE))
+			fault_type_str = "write";
+		else if (!write_fault)
+			fault_type_str = "read";
+		else
+			goto handle_fault;
+
+		kvm_warn_ratelimited("Unexpected L2 %s fault on locked memslot %d: IPA=%#llx, ESR_EL2=%#08x\n",
+				     fault_type_str, memslot->id, fault_ipa,
+				     kvm_vcpu_get_esr(vcpu));
+	}
+
+handle_fault:
 	if (fault_status == FSC_ACCESS) {
 		handle_access_fault(vcpu, fault_ipa);
 		ret = 1;
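
Note for readers looking at this patch in isolation: the hunk above relies
on the memslot_is_locked() helper and the KVM_MEMSLOT_LOCK_* memslot arch
flags introduced earlier in this series. As a rough illustration only, a
helper of that shape could look like the sketch below; KVM_MEMSLOT_LOCK_READ,
the bit values, and the exact body are assumptions, not the series' actual
definitions.

	/*
	 * Hypothetical sketch of memslot_is_locked(); the real helper,
	 * the flag definitions, and the arch.flags field are added
	 * elsewhere in this series and may differ.
	 */
	#include <linux/kvm_host.h>	/* struct kvm_memory_slot */

	#define KVM_MEMSLOT_LOCK_READ	(1 << 0)	/* assumed bit value */
	#define KVM_MEMSLOT_LOCK_WRITE	(1 << 1)	/* assumed bit value */
	#define KVM_MEMSLOT_LOCK_MASK	\
		(KVM_MEMSLOT_LOCK_READ | KVM_MEMSLOT_LOCK_WRITE)

	static inline bool memslot_is_locked(struct kvm_memory_slot *memslot)
	{
		/* A memslot counts as locked if any lock flag is set. */
		return memslot && (memslot->arch.flags & KVM_MEMSLOT_LOCK_MASK);
	}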