From patchwork Sat Apr 27 00:53:38 2013
X-Patchwork-Submitter: Scott Wood
X-Patchwork-Id: 2496241
From: Scott Wood
To: Alexander Graf
CC: , , Scott Wood
Subject: [PATCH 1/3] kvm/ppc/booke: Hold srcu lock when calling gfn functions
Date: Fri, 26 Apr 2013 19:53:38 -0500
Message-ID: <1367024020-14204-1-git-send-email-scottwood@freescale.com>
X-Mailer: git-send-email 1.7.10.4
X-Mailing-List: kvm@vger.kernel.org

KVM core expects arch code to acquire the srcu lock when calling
gfn_to_memslot and similar functions.
Signed-off-by: Scott Wood
---
 arch/powerpc/kvm/44x_tlb.c  |  5 +++++
 arch/powerpc/kvm/booke.c    | 19 +++++++++++++++++++
 arch/powerpc/kvm/e500_mmu.c |  5 +++++
 3 files changed, 29 insertions(+)

diff --git a/arch/powerpc/kvm/44x_tlb.c b/arch/powerpc/kvm/44x_tlb.c
index 5dd3ab4..ed03854 100644
--- a/arch/powerpc/kvm/44x_tlb.c
+++ b/arch/powerpc/kvm/44x_tlb.c
@@ -441,6 +441,7 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
 	struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu);
 	struct kvmppc_44x_tlbe *tlbe;
 	unsigned int gtlb_index;
+	int idx;
 
 	gtlb_index = kvmppc_get_gpr(vcpu, ra);
 	if (gtlb_index >= KVM44x_GUEST_TLB_SIZE) {
@@ -473,6 +474,8 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
 		return EMULATE_FAIL;
 	}
 
+	idx = srcu_read_lock(&vcpu->kvm->srcu);
+
 	if (tlbe_is_host_safe(vcpu, tlbe)) {
 		gva_t eaddr;
 		gpa_t gpaddr;
@@ -489,6 +492,8 @@ int kvmppc_44x_emul_tlbwe(struct kvm_vcpu *vcpu, u8 ra, u8 rs, u8 ws)
 		kvmppc_mmu_map(vcpu, eaddr, gpaddr, gtlb_index);
 	}
 
+	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
 	trace_kvm_gtlb_write(gtlb_index, tlbe->tid, tlbe->word0,
 			     tlbe->word1, tlbe->word2);
 
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 1020119..506c87d 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -832,6 +832,8 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 {
 	int r = RESUME_HOST;
 	int s;
+	int idx = 0; /* silence bogus uninitialized warning */
+	bool need_srcu = false;
 
 	/* update before a new last_exit_type is rewritten */
 	kvmppc_update_timing_stats(vcpu);
@@ -847,6 +849,20 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	run->exit_reason = KVM_EXIT_UNKNOWN;
 	run->ready_for_interrupt_injection = 1;
 
+	/*
+	 * Don't get the srcu lock unconditionally, because kvm_ppc_pv()
+	 * can call kvm_vcpu_block(), and kvm_ppc_pv() is shared with
+	 * book3s, so dropping the srcu lock there would be awkward.
+	 */
+	switch (exit_nr) {
+	case BOOKE_INTERRUPT_ITLB_MISS:
+	case BOOKE_INTERRUPT_DTLB_MISS:
+		need_srcu = true;
+	}
+
+	if (need_srcu)
+		idx = srcu_read_lock(&vcpu->kvm->srcu);
+
 	switch (exit_nr) {
 	case BOOKE_INTERRUPT_MACHINE_CHECK:
 		printk("MACHINE CHECK: %lx\n", mfspr(SPRN_MCSR));
@@ -1138,6 +1154,9 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		BUG();
 	}
 
+	if (need_srcu)
+		srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
 	/*
 	 * To avoid clobbering exit_reason, only check for signals if we
 	 * aren't already exiting to userspace for some other reason.
diff --git a/arch/powerpc/kvm/e500_mmu.c b/arch/powerpc/kvm/e500_mmu.c
index c41a5a9..6d6f153 100644
--- a/arch/powerpc/kvm/e500_mmu.c
+++ b/arch/powerpc/kvm/e500_mmu.c
@@ -396,6 +396,7 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
 	struct kvm_book3e_206_tlb_entry *gtlbe;
 	int tlbsel, esel;
 	int recal = 0;
+	int idx;
 
 	tlbsel = get_tlb_tlbsel(vcpu);
 	esel = get_tlb_esel(vcpu, tlbsel);
@@ -430,6 +431,8 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
 		kvmppc_set_tlb1map_range(vcpu, gtlbe);
 	}
 
+	idx = srcu_read_lock(&vcpu->kvm->srcu);
+
 	/* Invalidate shadow mappings for the about-to-be-clobbered TLBE. */
 	if (tlbe_is_host_safe(vcpu, gtlbe)) {
 		u64 eaddr = get_tlb_eaddr(gtlbe);
@@ -444,6 +447,8 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu)
 		kvmppc_mmu_map(vcpu, eaddr, raddr, index_of(tlbsel, esel));
 	}
 
+	srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
 	kvmppc_set_exit_type(vcpu, EMULATED_TLBWE_EXITS);
 
 	return EMULATE_DONE;
 }
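
For reference, every hunk above applies the same locking pattern: take the
kvm->srcu read lock before any path that resolves a guest frame number
(gfn_to_memslot() directly, or indirectly via kvmppc_mmu_map()), and drop it
with the index returned by the lock call. A minimal sketch of that pattern is
below; the helper name and the visibility check are made up for illustration
and are not part of this patch:

#include <linux/kvm_host.h>	/* struct kvm_vcpu, gfn_to_memslot() */
#include <linux/srcu.h>		/* srcu_read_lock(), srcu_read_unlock() */

/*
 * Illustrative helper only (not in this patch): the gfn lookup and any
 * use of the returned memslot must stay inside the srcu read-side
 * critical section, because the memslot array is protected by kvm->srcu
 * and can be replaced concurrently by a memslot update.
 */
static bool example_gfn_has_memslot(struct kvm_vcpu *vcpu, gfn_t gfn)
{
	struct kvm_memory_slot *slot;
	bool backed;
	int idx;

	idx = srcu_read_lock(&vcpu->kvm->srcu);		/* pin memslots */
	slot = gfn_to_memslot(vcpu->kvm, gfn);		/* legal only under srcu */
	backed = slot && !(slot->flags & KVM_MEMSLOT_INVALID);
	srcu_read_unlock(&vcpu->kvm->srcu, idx);	/* pass back the same idx */

	return backed;
}

As the added comment in booke.c explains, the exit handler only takes the
lock for the ITLB/DTLB miss exits rather than unconditionally, so that the
shared paravirt path can still block in kvm_vcpu_block() without holding the
srcu read lock.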