From patchwork Thu Nov 19 16:25:39 2020
X-Patchwork-Submitter: David Brazdil
X-Patchwork-Id: 11918215
From: David Brazdil <dbrazdil@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: Mark Rutland, kernel-team@android.com, Suzuki K Poulose, Marc Zyngier,
    linux-kernel@vger.kernel.org, James Morse, linux-arm-kernel@lists.infradead.org,
    Catalin Marinas, David Brazdil, Will Deacon, Ard Biesheuvel, Julien Thierry,
    Andrew Scull
Subject: [RFC PATCH 2/6] kvm: arm64: Fix up RELA relocations in hyp code/data
Date: Thu, 19 Nov 2020 16:25:39 +0000
Message-Id: <20201119162543.78001-3-dbrazdil@google.com>
In-Reply-To: <20201119162543.78001-1-dbrazdil@google.com>
References: <20201119162543.78001-1-dbrazdil@google.com>

KVM nVHE code runs under a different VA
mapping than the kernel, hence so far it relied only on PC-relative
addressing to avoid accidentally using a relocated kernel VA from a
constant pool (see hyp_symbol_addr).

So as to reduce the possibility of a programmer error, fix up the
relocated addresses instead. Let the kernel relocate them to kernel VA
first, then iterate over them again, filter those that point to hyp
code/data, and convert the kernel VA to hyp VA. This is done after
kvm_compute_layout and before apply_alternatives.

Signed-off-by: David Brazdil <dbrazdil@google.com>
---
 arch/arm64/include/asm/kvm_mmu.h |  1 +
 arch/arm64/kernel/smp.c          |  4 +-
 arch/arm64/kvm/va_layout.c       | 76 ++++++++++++++++++++++++++++++++
 3 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 5168a0c516ae..e5226f7e4732 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -105,6 +105,7 @@ alternative_cb_end
 void kvm_update_va_mask(struct alt_instr *alt,
 			__le32 *origptr, __le32 *updptr, int nr_inst);
 void kvm_compute_layout(void);
+void kvm_fixup_hyp_relocations(void);
 
 static __always_inline unsigned long __kern_hyp_va(unsigned long v)
 {
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 18e9727d3f64..30241afc2c93 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -434,8 +434,10 @@ static void __init hyp_mode_check(void)
 			   "CPU: CPUs started in inconsistent modes");
 	else
 		pr_info("CPU: All CPU(s) started at EL1\n");
-	if (IS_ENABLED(CONFIG_KVM))
+	if (IS_ENABLED(CONFIG_KVM)) {
 		kvm_compute_layout();
+		kvm_fixup_hyp_relocations();
+	}
 }
 
 void __init smp_cpus_done(unsigned int max_cpus)
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index d8cc51bd60bf..b80fab974896 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -82,6 +83,81 @@ __init void kvm_compute_layout(void)
 	init_hyp_physvirt_offset();
 }
 
+#define __load_elf_u64(s)					\
+	({							\
+		extern u64 s;					\
+		u64 val;					\
+								\
+		asm ("ldr %0, =%1" : "=r"(val) : "S"(&s));	\
+		val;						\
+	})
+
+static bool __is_within_bounds(u64 addr, char *start, char *end)
+{
+	return start <= (char*)addr && (char*)addr < end;
+}
+
+static bool __is_in_hyp_section(u64 addr)
+{
+	return __is_within_bounds(addr, __hyp_text_start, __hyp_text_end) ||
+	       __is_within_bounds(addr, __hyp_rodata_start, __hyp_rodata_end) ||
+	       __is_within_bounds(addr,
+				  CHOOSE_NVHE_SYM(__per_cpu_start),
+				  CHOOSE_NVHE_SYM(__per_cpu_end));
+}
+
+static void __fixup_hyp_rel(u64 addr)
+{
+	u64 *ptr, kern_va, hyp_va;
+
+	/* Adjust the relocation address taken from ELF for KASLR. */
+	addr += kaslr_offset();
+
+	/* Skip addresses not in any of the hyp sections. */
+	if (!__is_in_hyp_section(addr))
+		return;
+
+	/* Get the LM alias of the relocation address. */
+	ptr = (u64*)kvm_ksym_ref((void*)addr);
+
+	/*
+	 * Read the value at the relocation address. It has already been
+	 * relocated to the actual kernel kimg VA.
+	 */
+	kern_va = (u64)kvm_ksym_ref((void*)*ptr);
+
+	/* Convert to hyp VA. */
+	hyp_va = __early_kern_hyp_va(kern_va);
+
+	/* Store hyp VA at the relocation address. */
+	*ptr = hyp_va;
+}
+
+static void __fixup_hyp_rela(void)
+{
+	Elf64_Rela *rel;
+	size_t i, n;
+
+	rel = (Elf64_Rela*)(kimage_vaddr + __load_elf_u64(__rela_offset));
+	n = __load_elf_u64(__rela_size) / sizeof(*rel);
+
+	for (i = 0; i < n; ++i)
+		__fixup_hyp_rel(rel[i].r_offset);
+}
+
+/*
+ * The kernel relocated pointers to kernel VA. Iterate over relocations in
+ * the hypervisor ELF sections and convert them to hyp VA. This avoids the
+ * need to only use PC-relative addressing in hyp.
+ */
+__init void kvm_fixup_hyp_relocations(void)
+{
+	if (!IS_ENABLED(CONFIG_RELOCATABLE) || has_vhe())
+		return;
+
+	__fixup_hyp_rela();
+}
+
 static u32 compute_instruction(int n, u32 rd, u32 rn)
 {
 	u32 insn = AARCH64_BREAK_FAULT;