From patchwork Thu Nov 19 16:25:42 2020
X-Patchwork-Submitter: David Brazdil <dbrazdil@google.com>
X-Patchwork-Id: 11918217
From: David Brazdil <dbrazdil@google.com>
To: kvmarm@lists.cs.columbia.edu
Subject: [RFC PATCH 5/6] kvm: arm64: Fix constant-pool users in hyp
Date: Thu, 19 Nov 2020 16:25:42 +0000
Message-Id: <20201119162543.78001-6-dbrazdil@google.com>
In-Reply-To: <20201119162543.78001-1-dbrazdil@google.com>
References: <20201119162543.78001-1-dbrazdil@google.com>
Cc: Mark Rutland, kernel-team@android.com, Suzuki K Poulose, Marc Zyngier,
 linux-kernel@vger.kernel.org, James Morse, linux-arm-kernel@lists.infradead.org,
 Catalin Marinas, David Brazdil, Will Deacon, Ard Biesheuvel, Julien Thierry,
 Andrew Scull

Hyp code used to rely on absolute addressing via a constant pool to
obtain the kernel VA of three symbols - panic, __hyp_panic_string and
__kvm_handle_stub_hvc. This worked because the kernel would relocate
the addresses in the constant pool to kernel VAs at boot and hyp would
simply load them from there.

Now that relocations are fixed up to point to hyp VAs instead, this no
longer works. Rework the helpers to convert hyp VAs to kernel VAs / PAs
as needed.
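
For illustration, the address arithmetic performed by the reworked
helpers can be modelled in C roughly as follows. This is only a sketch,
not kernel code: the two offset variables are the symbols the macros in
the diff below rely on (hyp_physvirt_offset, and the kimage_voffset
value patched in by kvm_get_kimage_voffset), while the C helper
functions themselves are hypothetical.

#include <stdint.h>

/* Offsets as used by the asm macros; set up elsewhere at boot. */
static uint64_t hyp_physvirt_offset;	/* PA - hyp VA */
static uint64_t kimage_voffset;		/* kernel image VA - PA */

/* hyp VA -> PA: the arithmetic the hyp_pa macro performs. */
static uint64_t hyp_pa(uint64_t hyp_va)
{
	return hyp_va + hyp_physvirt_offset;
}

/* hyp VA -> kernel image VA: the arithmetic the hyp_kimg macro performs. */
static uint64_t hyp_kimg(uint64_t hyp_va)
{
	return hyp_pa(hyp_va) + kimage_voffset;
}
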
Signed-off-by: David Brazdil <dbrazdil@google.com>
---
 arch/arm64/include/asm/kvm_mmu.h | 29 +++++++++++++++++++----------
 arch/arm64/kvm/hyp/nvhe/host.S   | 29 +++++++++++++++--------------
 2 files changed, 34 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 8cb8974ec9cc..0676ff2105bb 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -72,9 +72,14 @@ alternative_cb kvm_update_va_mask
 alternative_cb_end
 .endm
 
+.macro hyp_pa reg, tmp
+	ldr_l	\tmp, hyp_physvirt_offset
+	add	\reg, \reg, \tmp
+.endm
+
 /*
- * Convert a kernel image address to a PA
- * reg: kernel address to be converted in place
+ * Convert a hypervisor VA to a kernel image address
+ * reg: hypervisor address to be converted in place
  * tmp: temporary register
  *
  * The actual code generation takes place in kvm_get_kimage_voffset, and
@@ -82,18 +87,22 @@ alternative_cb_end
  * perform the register allocation (kvm_get_kimage_voffset uses the
  * specific registers encoded in the instructions).
  */
-.macro kimg_pa reg, tmp
+.macro hyp_kimg reg, tmp
+	/* Convert hyp VA -> PA. */
+	hyp_pa	\reg, \tmp
+
+	/* Load kimage_voffset. */
 alternative_cb kvm_get_kimage_voffset
-	movz	\tmp, #0
-	movk	\tmp, #0, lsl #16
-	movk	\tmp, #0, lsl #32
-	movk	\tmp, #0, lsl #48
+	movz	\tmp, #0
+	movk	\tmp, #0, lsl #16
+	movk	\tmp, #0, lsl #32
+	movk	\tmp, #0, lsl #48
 alternative_cb_end
 
-	/* reg = __pa(reg) */
-	sub	\reg, \reg, \tmp
+	/* Convert PA -> kimg VA. */
+	add	\reg, \reg, \tmp
 .endm
-
+
 #else
 
 #include
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 596dd5ae8e77..bcb80d525d8c 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -74,27 +74,28 @@ SYM_FUNC_END(__host_enter)
  * void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
  */
 SYM_FUNC_START(__hyp_do_panic)
-	/* Load the format arguments into x1-7 */
-	mov	x6, x3
-	get_vcpu_ptr	x7, x3
-
-	mrs	x3, esr_el2
-	mrs	x4, far_el2
-	mrs	x5, hpfar_el2
-
 	/* Prepare and exit to the host's panic funciton. */
 	mov	lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
 		      PSR_MODE_EL1h)
 	msr	spsr_el2, lr
 	ldr	lr, =panic
+	hyp_kimg lr, x6
 	msr	elr_el2, lr
 
-	/*
-	 * Set the panic format string and enter the host, conditionally
-	 * restoring the host context.
-	 */
+	/* Set the panic format string. Use the, now free, LR as scratch. */
+	ldr	lr, =__hyp_panic_string
+	hyp_kimg lr, x6
+
+	/* Load the format arguments into x1-7. */
+	mov	x6, x3
+	get_vcpu_ptr	x7, x3
+	mrs	x3, esr_el2
+	mrs	x4, far_el2
+	mrs	x5, hpfar_el2
+
+	/* Enter the host, conditionally restoring the host context. */
 	cmp	x0, xzr
-	ldr	x0, =__hyp_panic_string
+	mov	x0, lr
 	b.eq	__host_enter_without_restoring
 	b	__host_enter_for_panic
 SYM_FUNC_END(__hyp_do_panic)
@@ -124,7 +125,7 @@ SYM_FUNC_END(__hyp_do_panic)
 	 * Preserve x0-x4, which may contain stub parameters.
 	 */
 	ldr	x5, =__kvm_handle_stub_hvc
-	kimg_pa x5, x6
+	hyp_pa x5, x6
 	br	x5
 .L__vect_end\@:
 .if ((.L__vect_end\@ - .L__vect_start\@) > 0x80)
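
A note on the host.S hunk above: the x1-x7 format-argument loads are
moved after the two hyp_kimg conversions because the macro uses x6 as
its scratch register, so keeping the old order would let the
conversions clobber x6. The net effect on the panic path can be
sketched in C as follows, reusing the illustrative hyp_kimg() helper
from the earlier sketch; the struct and function names here are
hypothetical, not kernel code.

struct host_panic_state {
	uint64_t elr_el2;	/* resume point: kernel VA of panic */
	uint64_t x0;		/* first argument: kernel VA of __hyp_panic_string */
};

/*
 * Both constant-pool literals now hold hyp VAs, so convert them to
 * kernel image VAs before handing control back to the host.
 */
static struct host_panic_state prepare_host_panic(uint64_t panic_hyp_va,
						  uint64_t panic_string_hyp_va)
{
	struct host_panic_state st = {
		.elr_el2 = hyp_kimg(panic_hyp_va),
		.x0	 = hyp_kimg(panic_string_hyp_va),
	};
	return st;
}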