From patchwork Tue Nov 17 18:15:43 2020
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 11913155
Date: Tue, 17 Nov 2020 18:15:43 +0000
In-Reply-To: <20201117181607.1761516-1-qperret@google.com>
Message-Id: <20201117181607.1761516-4-qperret@google.com>
References: <20201117181607.1761516-1-qperret@google.com>
Subject: [RFC PATCH 03/27] KVM: arm64: Add standalone ticket spinlock implementation for use at hyp
From: Quentin Perret
To: Catalin Marinas, Will Deacon, Marc Zyngier, James Morse, Julien Thierry,
    Suzuki K Poulose, Rob Herring, Frank Rowand
Cc: "open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE", Quentin Perret,
    android-kvm@google.com, open list, kernel-team@android.com,
    "open list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
    "moderated list:ARM64 PORT (AARCH64 ARCHITECTURE)"

From: Will Deacon

We will soon need to
synchronise multiple CPUs in the hyp text at EL2. The qspinlock-based
locking used by the host is overkill for this purpose and requires a
working "percpu" implementation for the MCS nodes.

Implement a simple ticket locking scheme based heavily on the code
removed by c11090474d70 ("arm64: locking: Replace ticket lock
implementation with qspinlock").

[ qperret: removed the __KVM_NVHE_HYPERVISOR__ build-time check
  from spinlock.h ]

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 95 ++++++++++++++++++++++
 arch/arm64/kvm/hyp/include/nvhe/util.h     | 25 ++++++
 2 files changed, 120 insertions(+)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/spinlock.h
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/util.h

diff --git a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
new file mode 100644
index 000000000000..bbfe2cbd9f62
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
@@ -0,0 +1,95 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * A stand-alone ticket spinlock implementation, primarily for use by the
+ * non-VHE hypervisor code running at EL2.
+ *
+ * Copyright (C) 2020 Google LLC
+ * Author: Will Deacon
+ *
+ * Heavily based on the implementation removed by c11090474d70 which was:
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ARM64_KVM_HYP_SPINLOCK_H__
+#define __ARM64_KVM_HYP_SPINLOCK_H__
+
+#include
+
+typedef union hyp_spinlock {
+	u32	__val;
+	struct {
+#ifdef __AARCH64EB__
+		u16 next, owner;
+#else
+		u16 owner, next;
+#endif
+	};
+} hyp_spinlock_t;
+
+#define hyp_spin_lock_init(l)						\
+do {									\
+	*(l) = (hyp_spinlock_t){ .__val = 0 };				\
+} while (0)
+
+static inline void hyp_spin_lock(hyp_spinlock_t *lock)
+{
+	u32 tmp;
+	hyp_spinlock_t lockval, newval;
+
+	asm volatile(
+	/* Atomically increment the next ticket. */
+	ALTERNATIVE(
+	/* LL/SC */
+"	prfm	pstl1strm, %3\n"
+"1:	ldaxr	%w0, %3\n"
+"	add	%w1, %w0, #(1 << 16)\n"
+"	stxr	%w2, %w1, %3\n"
+"	cbnz	%w2, 1b\n",
+	/* LSE atomics */
+"	.arch_extension lse\n"
+"	mov	%w2, #(1 << 16)\n"
+"	ldadda	%w2, %w0, %3\n"
+	__nops(3),
+	ARM64_HAS_LSE_ATOMICS)
+
+	/* Did we get the lock? */
+"	eor	%w1, %w0, %w0, ror #16\n"
+"	cbz	%w1, 3f\n"
+	/*
+	 * No: spin on the owner. Send a local event to avoid missing an
+	 * unlock before the exclusive load.
+	 */
+"	sevl\n"
+"2:	wfe\n"
+"	ldaxrh	%w2, %4\n"
+"	eor	%w1, %w2, %w0, lsr #16\n"
+"	cbnz	%w1, 2b\n"
+	/* We got the lock. Critical section starts here. */
+"3:"
+	: "=&r" (lockval), "=&r" (newval), "=&r" (tmp), "+Q" (*lock)
+	: "Q" (lock->owner)
+	: "memory");
+}
+
+static inline void hyp_spin_unlock(hyp_spinlock_t *lock)
+{
+	u64 tmp;
+
+	asm volatile(
+	ALTERNATIVE(
+	/* LL/SC */
+	"	ldrh	%w1, %0\n"
+	"	add	%w1, %w1, #1\n"
+	"	stlrh	%w1, %0",
+	/* LSE atomics */
+	"	.arch_extension lse\n"
+	"	mov	%w1, #1\n"
+	"	staddlh	%w1, %0\n"
+	__nops(1),
+	ARM64_HAS_LSE_ATOMICS)
+	: "=Q" (lock->owner), "=&r" (tmp)
+	:
+	: "memory");
+}
+
+#endif /* __ARM64_KVM_HYP_SPINLOCK_H__ */
diff --git a/arch/arm64/kvm/hyp/include/nvhe/util.h b/arch/arm64/kvm/hyp/include/nvhe/util.h
new file mode 100644
index 000000000000..9c58cc436a83
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/util.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Standalone re-implementations of kernel interfaces for use at EL2.
+ *
+ * Copyright (C) 2020 Google LLC
+ * Author: Will Deacon
+ */
+
+#ifndef __KVM_NVHE_HYPERVISOR__
+#error "Attempt to include nVHE code outside of EL2 object"
+#endif
+
+#ifndef __ARM64_KVM_NVHE_UTIL_H__
+#define __ARM64_KVM_NVHE_UTIL_H__
+
+/* Locking (hyp_spinlock_t) */
+#include
+
+#undef spin_lock_init
+#define spin_lock_init		hyp_spin_lock_init
+#undef spin_lock
+#define spin_lock		hyp_spin_lock
+#undef spin_unlock
+#define spin_unlock		hyp_spin_unlock
+
+#endif /* __ARM64_KVM_NVHE_UTIL_H__ */