From patchwork Wed Nov 4 18:36:19 2020
X-Patchwork-Id: 11881899
From: David Brazdil <dbrazdil@google.com>
To: kvmarm@lists.cs.columbia.edu
Cc: Mark Rutland, Lorenzo Pieralisi, kernel-team@android.com,
 Suzuki K Poulose, Marc Zyngier, Quentin Perret,
 linux-kernel@vger.kernel.org, James Morse,
 linux-arm-kernel@lists.infradead.org, Catalin Marinas, Tejun Heo,
 Dennis Zhou, Christoph Lameter, David Brazdil, Will Deacon,
 Julien Thierry, Andrew Scull
Subject: [RFC PATCH 15/26] arm64: kvm: Add standalone ticket spinlock
 implementation for use at hyp
Date: Wed, 4 Nov 2020 18:36:19 +0000
Message-Id: <20201104183630.27513-16-dbrazdil@google.com>
In-Reply-To: <20201104183630.27513-1-dbrazdil@google.com>
References: <20201104183630.27513-1-dbrazdil@google.com>

From: Will Deacon

We will soon need to synchronise multiple CPUs in the hyp text at EL2.
The qspinlock-based locking used by the host is overkill for this
purpose and relies on the kernel's "percpu" implementation for the
MCS nodes.

Implement a simple ticket locking scheme based heavily on the code
removed by commit c11090474d70 ("arm64: locking: Replace ticket lock
implementation with qspinlock").

Signed-off-by: Will Deacon
Signed-off-by: David Brazdil
---
 arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 96 ++++++++++++++++++++++
 1 file changed, 96 insertions(+)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/spinlock.h

diff --git a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
new file mode 100644
index 000000000000..dc0397e5b5f2
--- /dev/null
+++ b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * A stand-alone ticket spinlock implementation for use by the non-VHE
+ * KVM hypervisor code running at EL2.
+ *
+ * Copyright (C) 2020 Google LLC
+ * Author: Will Deacon
+ *
+ * Heavily based on the implementation removed by c11090474d70 which was:
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __KVM_NVHE_HYPERVISOR__
+#error "Attempt to include nVHE code outside of EL2 object"
+#endif
+
+#ifndef __ARM64_KVM_NVHE_SPINLOCK_H__
+#define __ARM64_KVM_NVHE_SPINLOCK_H__
+
+#include <asm/alternative.h>
+#include <asm/lse.h>
+
+typedef union hyp_spinlock {
+	u32	__val;
+	struct {
+#ifdef __AARCH64EB__
+		u16 next, owner;
+#else
+		u16 owner, next;
+#endif
+	};
+} hyp_spinlock_t;
+
+#define hyp_spin_lock_init(l)						\
+do {									\
+	*(l) = (hyp_spinlock_t){ .__val = 0 };				\
+} while (0)
+
+static inline void hyp_spin_lock(hyp_spinlock_t *lock)
+{
+	u32 tmp;
+	hyp_spinlock_t lockval, newval;
+
+	asm volatile(
+	/* Atomically increment the next ticket. */
+	ARM64_LSE_ATOMIC_INSN(
+	/* LL/SC */
+"	prfm	pstl1strm, %3\n"
+"1:	ldaxr	%w0, %3\n"
+"	add	%w1, %w0, #(1 << 16)\n"
+"	stxr	%w2, %w1, %3\n"
+"	cbnz	%w2, 1b\n",
+	/* LSE atomics */
+"	mov	%w2, #(1 << 16)\n"
+"	ldadda	%w2, %w0, %3\n"
+	__nops(3))
+
+	/* Did we get the lock? */
+"	eor	%w1, %w0, %w0, ror #16\n"
+"	cbz	%w1, 3f\n"
+	/*
+	 * No: spin on the owner. Send a local event to avoid missing an
+	 * unlock before the exclusive load.
+	 */
+"	sevl\n"
+"2:	wfe\n"
+"	ldaxrh	%w2, %4\n"
+"	eor	%w1, %w2, %w0, lsr #16\n"
+"	cbnz	%w1, 2b\n"
+	/* We got the lock. Critical section starts here. */
+"3:"
+	: "=&r" (lockval), "=&r" (newval), "=&r" (tmp), "+Q" (*lock)
+	: "Q" (lock->owner)
+	: "memory");
+}
+
+static inline void hyp_spin_unlock(hyp_spinlock_t *lock)
+{
+	u64 tmp;
+
+	asm volatile(
+	ARM64_LSE_ATOMIC_INSN(
+	/* LL/SC */
+	"	ldrh	%w1, %0\n"
+	"	add	%w1, %w1, #1\n"
+	"	stlrh	%w1, %0",
+	/* LSE atomics */
+	"	mov	%w1, #1\n"
+	"	staddlh	%w1, %0\n"
+	__nops(1))
+	: "=Q" (lock->owner), "=&r" (tmp)
+	:
+	: "memory");
+}
+
+#endif /* __ARM64_KVM_NVHE_SPINLOCK_H__ */
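
For reference, the ticket algorithm the inline asm above implements can be
modelled in portable C11 atomics. This is an illustrative sketch only (the
ticket_lock/ticket_unlock names are made up here, not part of the patch),
and it drops the WFE/SEVL waiting and the single 32-bit snapshot of both
halves that the asm gets by incrementing the whole union at once:

#include <stdatomic.h>
#include <stdint.h>

/*
 * A ticket lock hands each arriving CPU a 'next' ticket and admits
 * lockers in FIFO order as 'owner' catches up with it.
 */
struct ticket_lock {
	_Atomic uint16_t owner;	/* ticket currently being served */
	_Atomic uint16_t next;	/* next ticket to hand out */
};

static void ticket_lock(struct ticket_lock *l)
{
	/* Take a ticket; mirrors the ldadda/ldaxr+stxr increment of 'next'. */
	uint16_t me = atomic_fetch_add_explicit(&l->next, 1,
						memory_order_acquire);

	/*
	 * Wait until our number is called; the asm sleeps in WFE instead of
	 * busy-polling. 16-bit tickets wrap around harmlessly.
	 */
	while (atomic_load_explicit(&l->owner, memory_order_acquire) != me)
		;
}

static void ticket_unlock(struct ticket_lock *l)
{
	/* Serve the next ticket; mirrors the stlrh/staddlh release store. */
	atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
}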
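
Call sites would follow the usual spinlock pattern; a minimal hypothetical
sketch (the hyp_vm_table_lock/hyp_register_vm names are invented for
illustration and assume an nVHE object, i.e. one built with
__KVM_NVHE_HYPERVISOR__ defined):

#include <nvhe/spinlock.h>

static hyp_spinlock_t hyp_vm_table_lock;
static unsigned int hyp_nr_vms;

void hyp_vm_table_init(void)
{
	hyp_spin_lock_init(&hyp_vm_table_lock);
}

void hyp_register_vm(void)
{
	hyp_spin_lock(&hyp_vm_table_lock);	/* take a ticket, wait our turn */
	hyp_nr_vms++;				/* critical section */
	hyp_spin_unlock(&hyp_vm_table_lock);	/* serve the next waiter */
}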