From patchwork Thu Aug  4 09:15:14 2016
X-Patchwork-Submitter: Matthias Brugger
X-Patchwork-Id: 9263087
From: Matthias Brugger
To: pbonzini@redhat.com, rkrcmar@redhat.com, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, linux@armlinux.org.uk, catalin.marinas@arm.com,
	will.deacon@arm.com
Cc: suzuki.poulose@arm.com, james.morse@arm.com,
	david.daney@cavium.com, rrichter@cavium.com, agraf@suse.de,
	mbrugger@suse.com, mark.rutland@arm.com, lorenzo.pieralisi@arm.com,
	dave.long@linaro.org, ard.biesheuvel@linaro.org, zlim.lnx@gmail.com,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org
Subject: [PATCH 1/4] arm64: insn: Do not disable irqs during patching
Date: Thu, 4 Aug 2016 11:15:14 +0200
Message-Id: <1470302117-32296-2-git-send-email-mbrugger@suse.com>
In-Reply-To: <1470302117-32296-1-git-send-email-mbrugger@suse.com>
References: <1470302117-32296-1-git-send-email-mbrugger@suse.com>
X-Mailer: git-send-email 2.6.6
X-Mailing-List: kvm@vger.kernel.org

From: Robert Richter

__aarch64_insn_write() is always called with interrupts enabled. Thus,
there is no need to use an irqsave variant for the spin lock.

This change should also address the fix of:

 commit abffa6f3b157 ("arm64: convert patch_lock to raw lock")

We need to keep interrupts enabled to allow CPU sync for code patching
using smp_call_function*().
Signed-off-by: Robert Richter
Signed-off-by: Matthias Brugger
---
 arch/arm64/kernel/insn.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 63f9432..138bd8a 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -86,7 +86,7 @@ bool aarch64_insn_is_branch_imm(u32 insn)
 		aarch64_insn_is_bcond(insn));
 }
 
-static DEFINE_RAW_SPINLOCK(patch_lock);
+static DEFINE_SPINLOCK(patch_lock);
 
 static void __kprobes *patch_map(void *addr, int fixmap)
 {
@@ -129,16 +129,15 @@ int __kprobes aarch64_insn_read(void *addr, u32 *insnp)
 static int __kprobes __aarch64_insn_write(void *addr, u32 insn)
 {
 	void *waddr = addr;
-	unsigned long flags = 0;
 	int ret;
 
-	raw_spin_lock_irqsave(&patch_lock, flags);
+	spin_lock(&patch_lock);
 	waddr = patch_map(addr, FIX_TEXT_POKE0);
 
 	ret = probe_kernel_write(waddr, &insn, AARCH64_INSN_SIZE);
 
 	patch_unmap(FIX_TEXT_POKE0);
-	raw_spin_unlock_irqrestore(&patch_lock, flags);
+	spin_unlock(&patch_lock);
 
 	return ret;
 }