From patchwork Fri May 12 21:07:25 2023
Message-ID: <20230512205256.423407127@linutronix.de>
From: Thomas Gleixner
To: LKML
Cc: x86@kernel.org, David Woodhouse, Andrew Cooper, Brian Gerst,
    Arjan van de Veen, Paolo Bonzini, Paul McKenney, Tom Lendacky,
    Sean Christopherson, Oleksandr Natalenko, Paul Menzel,
    "Guilherme G. Piccoli", Piotr Gorski, Usama Arif, Juergen Gross,
    Boris Ostrovsky, xen-devel@lists.xenproject.org, Russell King,
    Arnd Bergmann, linux-arm-kernel@lists.infradead.org,
    Catalin Marinas, Will Deacon, Guo Ren, linux-csky@vger.kernel.org,
    Thomas Bogendoerfer, linux-mips@vger.kernel.org,
    "James E.J. Bottomley", Helge Deller, linux-parisc@vger.kernel.org,
    Paul Walmsley, Palmer Dabbelt, linux-riscv@lists.infradead.org,
    Mark Rutland, Sabin Rapan, "Michael Kelley (LINUX)", Ross Philipson
Subject: [patch V4 18/37] x86/xen/hvm: Get rid of DEAD_FROZEN handling
References: <20230512203426.452963764@linutronix.de>
Date: Fri, 12 May 2023 23:07:25 +0200 (CEST)

From: Thomas Gleixner

No point in this conditional voodoo. Un-initializing the lock mechanism
is safe to call unconditionally even if it was already invoked when the
CPU died.

Remove the invocation of xen_smp_intr_free() as that has already been
cleaned up in xen_cpu_dead_hvm().
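
For context, a minimal standalone sketch of the idempotent-teardown
pattern this relies on: an uninit routine that records its own state can
be called unconditionally, which is what lets the CPU_DEAD_FROZEN special
case go away. The per-CPU array, the -1 sentinel and the function names
below are illustrative assumptions, not the actual xen_uninit_lock_cpu()
internals.

/*
 * Sketch only: models why calling the uninit path twice is harmless.
 * Names and data layout are made up for illustration.
 */
#include <stdio.h>

#define NR_CPUS 4

/* -1 means "no kicker IRQ bound for this CPU" */
static int lock_kicker_irq[NR_CPUS] = { -1, -1, -1, -1 };

static void sketch_init_lock_cpu(int cpu)
{
	lock_kicker_irq[cpu] = 100 + cpu;	/* pretend an IRQ was bound */
}

static void sketch_uninit_lock_cpu(int cpu)
{
	if (lock_kicker_irq[cpu] == -1)
		return;				/* already torn down: no-op */

	/* a real implementation would unbind the IRQ handler here */
	lock_kicker_irq[cpu] = -1;
}

int main(void)
{
	sketch_init_lock_cpu(1);
	sketch_uninit_lock_cpu(1);		/* tears down */
	sketch_uninit_lock_cpu(1);		/* harmless second call */
	printf("cpu1 kicker irq: %d\n", lock_kicker_irq[1]);
	return 0;
}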
Signed-off-by: Thomas Gleixner
Tested-by: Michael Kelley
---
 arch/x86/xen/enlighten_hvm.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -161,13 +161,12 @@ static int xen_cpu_up_prepare_hvm(unsign
 	int rc = 0;
 
 	/*
-	 * This can happen if CPU was offlined earlier and
-	 * offlining timed out in common_cpu_die().
+	 * If a CPU was offlined earlier and offlining timed out then the
+	 * lock mechanism is still initialized. Uninit it unconditionally
+	 * as it's safe to call even if already uninited. Interrupts and
+	 * timer have already been handled in xen_cpu_dead_hvm().
 	 */
-	if (cpu_report_state(cpu) == CPU_DEAD_FROZEN) {
-		xen_smp_intr_free(cpu);
-		xen_uninit_lock_cpu(cpu);
-	}
+	xen_uninit_lock_cpu(cpu);
 
 	if (cpu_acpi_id(cpu) != U32_MAX)
 		per_cpu(xen_vcpu_id, cpu) = cpu_acpi_id(cpu);