From patchwork Tue May 24 14:47:45 2016
X-Patchwork-Submitter: Yongji Xie
X-Patchwork-Id: 9133799
From: Yongji Xie <xyjxie@linux.vnet.ibm.com>
To: qemu-devel@nongnu.org
Cc: kvm@vger.kernel.org, alex.williamson@redhat.com, aik@ozlabs.ru,
    zhong@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
    gwshan@linux.vnet.ibm.com, kevin.tian@intel.com
Subject: [RFC PATCH] vfio: Add support for mmapping sub-page MMIO BARs
Date: Tue, 24 May 2016 22:47:45 +0800
Message-Id: <1464101265-28080-1-git-send-email-xyjxie@linux.vnet.ibm.com>

Now the kernel patch [1] allows VFIO to mmap sub-page BARs. This is the
corresponding QEMU patch.
With those patches applied, we can pass sub-page BARs through to the guest,
which can help improve I/O performance for some devices.

In this patch, we expand the MemoryRegions of these sub-page MMIO BARs to
PAGE_SIZE in vfio_pci_write_config(), so that the BARs can be passed to the
KVM ioctl KVM_SET_USER_MEMORY_REGION with a valid size. The expansion is
reverted when the guest changes the base address of a sub-page BAR so that
it is no longer page aligned. We also set the priority of these BARs'
memory regions to zero, to handle overlap with BARs that share the same
guest page as a sub-page BAR.

[1] http://www.spinics.net/lists/kvm/msg132382.html

Signed-off-by: Yongji Xie <xyjxie@linux.vnet.ibm.com>
---
 hw/vfio/common.c |    3 +--
 hw/vfio/pci.c    |   69 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 88154a1..b898532 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -522,8 +522,7 @@ int vfio_region_setup(Object *obj, VFIODevice *vbasedev, VFIORegion *region,
                               region, name, region->size);
 
         if (!vbasedev->no_mmap &&
-            region->flags & VFIO_REGION_INFO_FLAG_MMAP &&
-            !(region->size & ~qemu_real_host_page_mask)) {
+            region->flags & VFIO_REGION_INFO_FLAG_MMAP) {
 
             region->nr_mmaps = 1;
             region->mmaps = g_new0(VFIOMmap, region->nr_mmaps);
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index d091d8c..edf9c8d 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -1057,6 +1057,58 @@ static const MemoryRegionOps vfio_vga_ops = {
 };
 
 /*
+ * Expand the memory regions of sub-page (size < PAGE_SIZE) MMIO BARs to
+ * page size if the BARs occupy an exclusive page on the host, and set
+ * the priority of these memory regions to zero in case they overlap
+ * with BARs that share the same guest page as a sub-page BAR. If the
+ * base address of a sub-page BAR is changed and is no longer page
+ * aligned, recover its original size.
+ */
+static void vfio_sub_page_bar_update_mapping(PCIDevice *pdev, int bar)
+{
+    VFIOPCIDevice *vdev = DO_UPCAST(VFIOPCIDevice, pdev, pdev);
+    MemoryRegion *mmap_mr;
+    MemoryRegion *mr;
+    PCIIORegion *r;
+    pcibus_t bar_addr;
+
+    if (vdev->bars[bar].region.nr_mmaps != 1) {
+        return;
+    }
+
+    r = &pdev->io_regions[bar];
+    bar_addr = r->addr;
+    if (bar_addr == PCI_BAR_UNMAPPED) {
+        return;
+    }
+
+    memory_region_transaction_begin();
+    mr = vdev->bars[bar].region.mem;
+    mmap_mr = &vdev->bars[bar].region.mmaps[0].mem;
+    if (memory_region_size(mr) == qemu_real_host_page_size) {
+        if (bar_addr & ~qemu_real_host_page_mask) {
+            memory_region_set_size(mr, r->size);
+            memory_region_set_size(mmap_mr, r->size);
+        } else if (memory_region_is_mapped(mr)) {
+            memory_region_del_subregion(r->address_space, mr);
+            memory_region_add_subregion_overlap(r->address_space,
+                                                bar_addr, mr, 0);
+        }
+    } else {
+        if (!(bar_addr & ~qemu_real_host_page_mask) &&
+            memory_region_is_mapped(mr) &&
+            vdev->bars[bar].region.mmaps[0].mmap) {
+            memory_region_del_subregion(r->address_space, mr);
+            memory_region_set_size(mr, qemu_real_host_page_size);
+            memory_region_set_size(mmap_mr, qemu_real_host_page_size);
+            memory_region_add_subregion_overlap(r->address_space,
+                                                bar_addr, mr, 0);
+        }
+    }
+    memory_region_transaction_commit();
+}
+
+/*
  * PCI config space
  */
 uint32_t vfio_pci_read_config(PCIDevice *pdev, uint32_t addr, int len)
@@ -1139,6 +1191,23 @@ void vfio_pci_write_config(PCIDevice *pdev,
         } else if (was_enabled && !is_enabled) {
             vfio_msix_disable(vdev);
         }
+    } else if (ranges_overlap(addr, len, PCI_BASE_ADDRESS_0, 24) ||
+               range_covers_byte(addr, len, PCI_COMMAND)) {
+        pcibus_t old_addr[PCI_NUM_REGIONS - 1];
+        int bar;
+
+        for (bar = 0; bar < PCI_ROM_SLOT; bar++) {
+            old_addr[bar] = pdev->io_regions[bar].addr;
+        }
+
+        pci_default_write_config(pdev, addr, val, len);
+
+        for (bar = 0; bar < PCI_ROM_SLOT; bar++) {
+            if (old_addr[bar] != pdev->io_regions[bar].addr &&
+                pdev->io_regions[bar].size > 0 &&
+                pdev->io_regions[bar].size < qemu_real_host_page_size)
+                vfio_sub_page_bar_update_mapping(pdev, bar);
+        }
     } else {
         /* Write everything to QEMU to keep emulated bits correct */
         pci_default_write_config(pdev, addr, val, len);
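
For reference only (not part of the patch): a minimal sketch of the condition
the update path above relies on. The page-sized expansion is only kept while
the guest has programmed the sub-page BAR at a host-page-aligned address,
since a KVM memory slot must be page aligned and page sized; at an unaligned
address the expanded region would expose the neighbouring part of the host
page, so the size is shrunk back. The helper name below is hypothetical.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper, for illustration only: may this sub-page BAR keep
 * the page-sized (mmap-able) mapping at its current guest address? */
bool sub_page_bar_can_expand(uint64_t guest_bar_addr,
                             uint64_t bar_size,
                             uint64_t host_page_size)
{
    return bar_size > 0 &&
           bar_size < host_page_size &&                   /* sub-page BAR */
           (guest_bar_addr & (host_page_size - 1)) == 0;  /* page-aligned base */
}

For example, with a 4 KiB host page size, a 0x100-byte BAR the guest places
at 0xfe000000 qualifies for the expanded mapping, while the same BAR moved to
0xfe000800 does not, and its MemoryRegion is shrunk back to 0x100 bytes.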