From patchwork Thu Dec 9 15:09:32 2021
X-Patchwork-Submitter: David Woodhouse
X-Patchwork-Id: 12666709
From: David Woodhouse
To: Thomas Gleixner
Cc: Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    "H. Peter Anvin", Paolo Bonzini, "Paul E. McKenney",
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, rcu@vger.kernel.org,
    mimoja@mimoja.de, hewenliang4@huawei.com, hushiyuan@huawei.com,
    luolongjun@huawei.com, hejingxian@huawei.com
Subject: [PATCH 05/11] x86/smpboot: Reference count on smpboot_setup_warm_reset_vector()
Date: Thu, 9 Dec 2021 15:09:32 +0000
Message-Id: <20211209150938.3518-6-dwmw2@infradead.org>
In-Reply-To: <20211209150938.3518-1-dwmw2@infradead.org>
References: <20211209150938.3518-1-dwmw2@infradead.org>
X-Mailer: git-send-email 2.31.1
X-Mailing-List: kvm@vger.kernel.org

From: David Woodhouse

If we want to do parallel CPU bringup, we're going to need to set this
up once and leave it in place until all CPUs are done. Might as well use
the RTC spinlock to protect the refcount, as we need to take it anyway.
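As an illustration only (not part of the patch): a minimal user-space sketch
of the same first-user-programs / last-user-restores refcount pattern, with a
pthread mutex standing in for rtc_lock and stub helpers in place of the CMOS
and trampoline writes.

/*
 * Standalone sketch of the refcounting pattern this patch applies: the
 * first caller programs the warm reset vector, the last caller restores
 * it, and the lock that already serializes the hardware accesses also
 * protects the count.  fake_rtc_lock and the two helpers are stand-ins.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t fake_rtc_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int warm_reset_vector_count;

static void program_vector(unsigned long start_eip)
{
	printf("program warm reset vector for %#lx\n", start_eip);
}

static void clear_vector(void)
{
	printf("restore warm reset vector\n");
}

static void setup_warm_reset_vector(unsigned long start_eip)
{
	pthread_mutex_lock(&fake_rtc_lock);
	if (!warm_reset_vector_count++)		/* only the first user programs it */
		program_vector(start_eip);
	pthread_mutex_unlock(&fake_rtc_lock);
}

static void restore_warm_reset_vector(void)
{
	pthread_mutex_lock(&fake_rtc_lock);
	if (!--warm_reset_vector_count)		/* only the last user tears it down */
		clear_vector();
	pthread_mutex_unlock(&fake_rtc_lock);
}

int main(void)
{
	/* Two overlapping users: the vector is programmed once and cleared once. */
	setup_warm_reset_vector(0x9000);
	setup_warm_reset_vector(0x9000);
	restore_warm_reset_vector();
	restore_warm_reset_vector();
	return 0;
}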
Signed-off-by: David Woodhouse
---
 arch/x86/kernel/smpboot.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index ac2909f0cab3..99c705935f94 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -127,17 +127,22 @@ int arch_update_cpu_topology(void)
 
 	return retval;
 }
+
+static unsigned int smpboot_warm_reset_vector_count;
+
 static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
 {
 	unsigned long flags;
 
 	spin_lock_irqsave(&rtc_lock, flags);
-	CMOS_WRITE(0xa, 0xf);
+	if (!smpboot_warm_reset_vector_count++) {
+		CMOS_WRITE(0xa, 0xf);
+		*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_HIGH)) =
+			start_eip >> 4;
+		*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) =
+			start_eip & 0xf;
+	}
 	spin_unlock_irqrestore(&rtc_lock, flags);
-	*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_HIGH)) =
-		start_eip >> 4;
-	*((volatile unsigned short *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) =
-		start_eip & 0xf;
 }
 
 static inline void smpboot_restore_warm_reset_vector(void)
@@ -149,10 +154,12 @@ static inline void smpboot_restore_warm_reset_vector(void)
 	 * to default values.
	 */
 	spin_lock_irqsave(&rtc_lock, flags);
-	CMOS_WRITE(0, 0xf);
-	spin_unlock_irqrestore(&rtc_lock, flags);
+	if (!--smpboot_warm_reset_vector_count) {
+		CMOS_WRITE(0, 0xf);
 
-	*((volatile u32 *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) = 0;
+		*((volatile u32 *)phys_to_virt(TRAMPOLINE_PHYS_LOW)) = 0;
+	}
+	spin_unlock_irqrestore(&rtc_lock, flags);
 }
 
 static void init_freq_invariance(bool secondary, bool cppc_ready);