From patchwork Mon Mar 1 14:34:37 2010
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 82941
From: Joerg Roedel
To: Avi Kivity, Marcelo Tosatti
CC: Alexander Graf, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    Joerg Roedel
Subject: [PATCH 4/7] KVM: SVM: Optimize nested svm msrpm merging
Date: Mon, 1 Mar 2010 15:34:37 +0100
Message-ID: <1267454080-2513-5-git-send-email-joerg.roedel@amd.com>
X-Mailer: git-send-email 1.7.0
In-Reply-To: <1267454080-2513-1-git-send-email-joerg.roedel@amd.com>
References: <1267454080-2513-1-git-send-email-joerg.roedel@amd.com>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 18d7938..c04ce1e 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -92,6 +92,9 @@ struct nested_state {
 
 };
 
+#define MSRPM_OFFSETS 16
+static u32 msrpm_offsets[MSRPM_OFFSETS] __read_mostly;
+
 struct vcpu_svm {
 	struct kvm_vcpu vcpu;
 	struct vmcb *vmcb;
@@ -509,6 +512,49 @@ static void svm_vcpu_init_msrpm(u32 *msrpm)
 	}
 }
 
+static void add_msr_offset(u32 offset)
+{
+	int i;
+
+	for (i = 0; i < MSRPM_OFFSETS; ++i) {
+
+		/* Offset already in list? */
+		if (msrpm_offsets[i] == offset)
+			return;
+
+		/* Slot used by another offset? */
+		if (msrpm_offsets[i] != MSR_INVALID)
+			continue;
+
+		/* Add offset to list */
+		msrpm_offsets[i] = offset;
+
+		return;
+	}
+
+	/*
+	 * If this BUG triggers the msrpm_offsets table has an overflow. Just
+	 * increase MSRPM_OFFSETS in this case.
+	 */
+	BUG();
+}
+
+static void init_msrpm_offsets(void)
+{
+	int i;
+
+	memset(msrpm_offsets, 0xff, sizeof(msrpm_offsets));
+
+	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
+		u32 offset;
+
+		offset = svm_msrpm_offset(direct_access_msrs[i].index);
+		BUG_ON(offset == MSR_INVALID);
+
+		add_msr_offset(offset);
+	}
+}
+
 static void svm_enable_lbrv(struct vcpu_svm *svm)
 {
 	u32 *msrpm = svm->msrpm;
@@ -547,6 +593,8 @@ static __init int svm_hardware_setup(void)
 	memset(iopm_va, 0xff, PAGE_SIZE * (1 << IOPM_ALLOC_ORDER));
 	iopm_base = page_to_pfn(iopm_pages) << PAGE_SHIFT;
 
+	init_msrpm_offsets();
+
 	if (boot_cpu_has(X86_FEATURE_NX))
 		kvm_enable_efer_bits(EFER_NX);
 
@@ -811,6 +859,7 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	svm->nested.hsave = page_address(hsave_page);
 
 	svm->nested.msrpm = page_address(nested_msrpm_pages);
+	svm_vcpu_init_msrpm(svm->nested.msrpm);
 
 	svm->vmcb = page_address(page);
 	clear_page(svm->vmcb);
@@ -1882,20 +1931,33 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
 
 static bool nested_svm_vmrun_msrpm(struct vcpu_svm *svm)
 {
-	u32 *nested_msrpm;
-	struct page *page;
+	/*
+	 * This function merges the msr permission bitmaps of kvm and the
+	 * nested vmcb. It is optimized in that it only merges the parts where
+	 * the kvm msr permission bitmap may contain zero bits
+	 */
 	int i;
 
-	nested_msrpm = nested_svm_map(svm, svm->nested.vmcb_msrpm, &page);
-	if (!nested_msrpm)
-		return false;
+	if (!(svm->nested.intercept & (1ULL << INTERCEPT_MSR_PROT)))
+		return true;
 
-	for (i = 0; i < PAGE_SIZE * (1 << MSRPM_ALLOC_ORDER) / 4; i++)
-		svm->nested.msrpm[i] = svm->msrpm[i] | nested_msrpm[i];
+	for (i = 0; i < MSRPM_OFFSETS; i++) {
+		u32 value, p;
+		u64 offset;
 
-	svm->vmcb->control.msrpm_base_pa = __pa(svm->nested.msrpm);
+		if (msrpm_offsets[i] == 0xffffffff)
+			break;
 
-	nested_svm_unmap(page);
+		offset = svm->nested.vmcb_msrpm + msrpm_offsets[i];
+		p      = msrpm_offsets[i] / 4;
+
+		if (kvm_read_guest(svm->vcpu.kvm, offset, &value, 4))
+			return false;
+
+		svm->nested.msrpm[p] = svm->msrpm[p] | value;
+	}
+
+	svm->vmcb->control.msrpm_base_pa = __pa(svm->nested.msrpm);
 
 	return true;
 }
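
To illustrate the idea outside of kernel context, below is a minimal
user-space C sketch of the same merging scheme. It is not part of the
patch: all example_* names, sizes and word indices are made up for
illustration, and the real code reads the nested guest's bitmap through
kvm_read_guest() rather than indexing an in-memory array.

/*
 * Illustrative sketch only, not kernel code. A small table of u32 word
 * offsets is built once; only those words can contain zero (allow) bits
 * in the host MSR permission bitmap, so only they need to be merged with
 * the guest's bitmap on a nested VMRUN instead of OR-ing the whole 8 KB.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define EXAMPLE_BITMAP_WORDS	(8192 / 4)	/* two 4 KB pages of u32 words */
#define EXAMPLE_MAX_OFFSETS	16
#define EXAMPLE_INVALID		0xffffffffu

static uint32_t example_offsets[EXAMPLE_MAX_OFFSETS];

/* Record a word offset once; mirrors the role of add_msr_offset(). */
static void example_add_offset(uint32_t offset)
{
	int i;

	for (i = 0; i < EXAMPLE_MAX_OFFSETS; i++) {
		if (example_offsets[i] == offset)
			return;			/* already listed */
		if (example_offsets[i] != EXAMPLE_INVALID)
			continue;		/* slot taken by another offset */
		example_offsets[i] = offset;	/* free slot: remember it */
		return;
	}
}

/* Merge host and guest permission words only at the recorded offsets. */
static void example_merge(uint32_t *merged, const uint32_t *host,
			  const uint32_t *guest)
{
	int i;

	for (i = 0; i < EXAMPLE_MAX_OFFSETS; i++) {
		uint32_t p = example_offsets[i];

		if (p == EXAMPLE_INVALID)
			break;
		merged[p] = host[p] | guest[p];
	}
}

int main(void)
{
	static uint32_t host[EXAMPLE_BITMAP_WORDS];
	static uint32_t guest[EXAMPLE_BITMAP_WORDS];
	static uint32_t merged[EXAMPLE_BITMAP_WORDS];

	/* Host intercepts everything except bits in two example words. */
	memset(host, 0xff, sizeof(host));
	memset(merged, 0xff, sizeof(merged));
	host[3]  = 0x00000000;
	host[40] = 0xfff0ffff;

	memset(example_offsets, 0xff, sizeof(example_offsets));
	example_add_offset(3);
	example_add_offset(40);
	example_add_offset(3);	/* duplicate is ignored */

	/* Guest additionally intercepts part of word 3. */
	guest[3] = 0x000000f0;

	example_merge(merged, host, guest);
	printf("merged[3] = 0x%08x, merged[40] = 0x%08x\n",
	       merged[3], merged[40]);
	return 0;
}

With a 16-entry offset table the nested VMRUN path touches at most 16
u32 words instead of OR-ing all 2048 words of the 8 KB bitmap, which is
the point of the optimization in this patch.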