From patchwork Sun Aug 30 09:12:47 2015
From: "Michael S. Tsirkin" <mst@redhat.com>
Date: Sun, 30 Aug 2015 12:12:47 +0300
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, Paolo Bonzini
Subject: [PATCH RFC 2/3] svm: allow ioeventfd for NPT page faults
Message-ID: <1440925898-23440-3-git-send-email-mst@redhat.com>
In-Reply-To: <1440925898-23440-1-git-send-email-mst@redhat.com>
References: <1440925898-23440-1-git-send-email-mst@redhat.com>

MMIO is slightly slower than port IO because it goes through the page
tables, so the CPU must do a page walk on each access.  This overhead
is normally hidden by the TLB, but not for KVM MMIO: the PTEs are
marked reserved, so the translations are never cached.

As ioeventfd memory is never read, make it possible to use read-only
pages on the host for ioeventfds instead.  The result is that the
translations are cached in the TLB, which finally makes MMIO as fast
as port IO.

Warning: untested.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 arch/x86/kvm/svm.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8e0c084..6422fac 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1812,6 +1812,11 @@ static int pf_interception(struct vcpu_svm *svm)
 	switch (svm->apf_reason) {
 	default:
 		error_code = svm->vmcb->control.exit_info_1;
+		if (!kvm_io_bus_write(&svm->vcpu, KVM_FAST_MMIO_BUS,
+				      fault_address, 0, NULL)) {
+			skip_emulated_instruction(&svm->vcpu);
+			return 1;
+		}
 		trace_kvm_page_fault(fault_address, error_code);
 		if (!npt_enabled && kvm_event_needs_reinjection(&svm->vcpu))
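
For context (not part of the patch): the hunk above only helps for
ioeventfds registered with length 0, which KVM also puts on
KVM_FAST_MMIO_BUS.  Below is a minimal userspace sketch of such a
registration through the KVM_IOEVENTFD ioctl; "vm_fd" and "mmio_gpa"
are placeholder names, and a real VMM would check
KVM_CAP_IOEVENTFD_NO_LENGTH before passing len = 0.

#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Register a wildcard (length 0, no datamatch) MMIO ioeventfd. */
static int register_fast_mmio_notify(int vm_fd, __u64 mmio_gpa)
{
	struct kvm_ioeventfd ioev;
	int efd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);

	if (efd < 0)
		return -1;

	memset(&ioev, 0, sizeof(ioev));
	ioev.addr  = mmio_gpa;	/* guest-physical doorbell address */
	ioev.len   = 0;		/* 0 = match any access width (fast MMIO bus) */
	ioev.fd    = efd;
	ioev.flags = 0;		/* MMIO (no _PIO flag), no datamatch */

	if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
		return -1;

	/* A guest write to mmio_gpa now signals efd without full emulation. */
	return efd;
}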