From patchwork Wed Jul 10 08:52:54 2024
X-Patchwork-Submitter: Ilias Stamatis
X-Patchwork-Id: 13729064
X-Mailing-List: kvm@vger.kernel.org
From: Ilias Stamatis
Subject: [PATCH 1/6] KVM: Fix coalesced_mmio_has_room()
Date: Wed, 10 Jul 2024 09:52:54 +0100
Message-ID: <20240710085259.2125131-2-ilstam@amazon.com>
In-Reply-To: <20240710085259.2125131-1-ilstam@amazon.com>
The following calculation used in coalesced_mmio_has_room() to check
whether the ring buffer is full is wrong and only allows half the
buffer to be used.

	avail = (ring->first - last - 1) % KVM_COALESCED_MMIO_MAX;
	if (avail == 0)
		/* full */

The % operator in C is not the modulo operator but the remainder
operator. Modulo and remainder operators differ with respect to
negative values, but all values are unsigned in this case anyway.

The above might have worked as expected in Python, for example:

	>>> (-86) % 170
	84

However, it doesn't work the same way in C.

	printf("avail: %d\n", (-86) % 170);
	printf("avail: %u\n", (-86) % 170);
	printf("avail: %u\n", (-86u) % 170u);

Using gcc-11 these print:

	avail: -86
	avail: 4294967210
	avail: 0

Fix the calculation and allow all but one entry in the buffer to be
used as originally intended.

Fixes: 105f8d40a737 ("KVM: Calculate available entries in coalesced mmio ring")
Signed-off-by: Ilias Stamatis
Reviewed-by: Paul Durrant
---
 virt/kvm/coalesced_mmio.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 1b90acb6e3fe..184c5c40c9c1 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -43,7 +43,6 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
 static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
 {
 	struct kvm_coalesced_mmio_ring *ring;
-	unsigned avail;
 
 	/* Are we able to batch it ? */
@@ -52,8 +51,7 @@ static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
 	 * there is always one unused entry in the buffer
 	 */
 	ring = dev->kvm->coalesced_mmio_ring;
-	avail = (ring->first - last - 1) % KVM_COALESCED_MMIO_MAX;
-	if (avail == 0) {
+	if ((last + 1) % KVM_COALESCED_MMIO_MAX == READ_ONCE(ring->first)) {
 		/* full */
 		return 0;
 	}

From patchwork Wed Jul 10 08:52:55 2024
X-Patchwork-Submitter: Ilias Stamatis
X-Patchwork-Id: 13729065
From: Ilias Stamatis
Subject: [PATCH 2/6] KVM: Add KVM_CREATE_COALESCED_MMIO_BUFFER ioctl
Date: Wed, 10 Jul 2024 09:52:55 +0100
Message-ID: <20240710085259.2125131-3-ilstam@amazon.com>
In-Reply-To: <20240710085259.2125131-1-ilstam@amazon.com>

The current MMIO coalescing design has a few drawbacks which limit its
usefulness. Currently, all coalesced MMIO zones use the same ring
buffer. That means that upon a userspace exit we have to handle
potentially unrelated MMIO writes synchronously. And a VM-wide lock
needs to be taken in the kernel when an MMIO exit occurs.

Additionally, there is no direct way for userspace to be notified about
coalesced MMIO writes. If the next MMIO exit to userspace only happens
once the ring buffer has filled, a substantial (and unbounded) amount
of time may have passed since the first coalesced MMIO write.

Add a KVM_CREATE_COALESCED_MMIO_BUFFER ioctl to KVM. This ioctl simply
returns a file descriptor to the caller but does not allocate a ring
buffer. Userspace can then pass this fd to mmap() to actually allocate
a buffer and map it into its address space.

Subsequent patches will allow userspace to:

- Associate the fd with a coalescing zone when registering it, so that
  writes to that zone are accumulated in that specific ring buffer
  rather than the VM-wide one.
- Poll for MMIO writes using this fd.
Signed-off-by: Ilias Stamatis
Reviewed-by: Paul Durrant
---
 include/linux/kvm_host.h  |   1 +
 include/uapi/linux/kvm.h  |   2 +
 virt/kvm/coalesced_mmio.c | 142 +++++++++++++++++++++++++++++++++++---
 virt/kvm/coalesced_mmio.h |   9 +++
 virt/kvm/kvm_main.c       |   4 ++
 5 files changed, 150 insertions(+), 8 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 692c01e41a18..c7b53c020cd2 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -799,6 +799,7 @@ struct kvm {
 	struct kvm_coalesced_mmio_ring *coalesced_mmio_ring;
 	spinlock_t ring_lock;
 	struct list_head coalesced_zones;
+	struct list_head coalesced_buffers;
 #endif
 
 	struct mutex irq_lock;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index d03842abae57..6d6f132e6203 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1548,4 +1548,6 @@ struct kvm_create_guest_memfd {
 	__u64 reserved[6];
 };
 
+#define KVM_CREATE_COALESCED_MMIO_BUFFER _IO(KVMIO, 0xd5)
+
 #endif /* __LINUX_KVM_H */
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 184c5c40c9c1..6443d4b62548 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -4,6 +4,7 @@
  *
  * Copyright (c) 2008 Bull S.A.S.
  * Copyright 2009 Red Hat, Inc. and/or its affiliates.
+ * Copyright 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
  *
  * Author: Laurent Vivier
  *
@@ -14,6 +15,7 @@
 #include
 #include
 #include
+#include
 
 #include "coalesced_mmio.h"
@@ -40,17 +42,14 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
 	return 1;
 }
 
-static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
+static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_ring *ring, u32 last)
 {
-	struct kvm_coalesced_mmio_ring *ring;
-
 	/* Are we able to batch it ? */
 
 	/* last is the first free entry
 	 * check if we don't meet the first used entry
 	 * there is always one unused entry in the buffer
 	 */
-	ring = dev->kvm->coalesced_mmio_ring;
 	if ((last + 1) % KVM_COALESCED_MMIO_MAX == READ_ONCE(ring->first)) {
 		/* full */
 		return 0;
@@ -65,17 +64,28 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 {
 	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
 	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
+	spinlock_t *lock = dev->buffer_dev ?
+			   &dev->buffer_dev->ring_lock :
+			   &dev->kvm->ring_lock;
 	__u32 insert;
 
 	if (!coalesced_mmio_in_range(dev, addr, len))
 		return -EOPNOTSUPP;
 
-	spin_lock(&dev->kvm->ring_lock);
+	spin_lock(lock);
+
+	if (dev->buffer_dev) {
+		ring = dev->buffer_dev->ring;
+		if (!ring) {
+			spin_unlock(lock);
+			return -EOPNOTSUPP;
+		}
+	}
 
 	insert = READ_ONCE(ring->last);
-	if (!coalesced_mmio_has_room(dev, insert) ||
+	if (!coalesced_mmio_has_room(ring, insert) ||
 	    insert >= KVM_COALESCED_MMIO_MAX) {
-		spin_unlock(&dev->kvm->ring_lock);
+		spin_unlock(lock);
 		return -EOPNOTSUPP;
 	}
 
@@ -87,7 +97,7 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 	ring->coalesced_mmio[insert].pio = dev->zone.pio;
 	smp_wmb();
 	ring->last = (insert + 1) % KVM_COALESCED_MMIO_MAX;
-	spin_unlock(&dev->kvm->ring_lock);
+	spin_unlock(lock);
 	return 0;
 }
 
@@ -122,6 +132,7 @@ int kvm_coalesced_mmio_init(struct kvm *kvm)
 	 */
 	spin_lock_init(&kvm->ring_lock);
 	INIT_LIST_HEAD(&kvm->coalesced_zones);
+	INIT_LIST_HEAD(&kvm->coalesced_buffers);
 
 	return 0;
 }
@@ -132,11 +143,125 @@ void kvm_coalesced_mmio_free(struct kvm *kvm)
 	free_page((unsigned long)kvm->coalesced_mmio_ring);
 }
 
+static void coalesced_mmio_buffer_vma_close(struct vm_area_struct *vma)
+{
+	struct kvm_coalesced_mmio_buffer_dev *dev = vma->vm_private_data;
+
+	spin_lock(&dev->ring_lock);
+
+	vfree(dev->ring);
+	dev->ring = NULL;
+
+	spin_unlock(&dev->ring_lock);
+}
+
+static const struct vm_operations_struct coalesced_mmio_buffer_vm_ops = {
+	.close = coalesced_mmio_buffer_vma_close,
+};
+
+static int coalesced_mmio_buffer_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct kvm_coalesced_mmio_buffer_dev *dev = file->private_data;
+	unsigned long pfn;
+	int ret = 0;
+
+	spin_lock(&dev->ring_lock);
+
+	if (dev->ring) {
+		ret = -EBUSY;
+		goto out_unlock;
+	}
+
+	dev->ring = vmalloc_user(PAGE_SIZE);
+	if (!dev->ring) {
+		ret = -ENOMEM;
+		goto out_unlock;
+	}
+
+	pfn = vmalloc_to_pfn(dev->ring);
+
+	if (remap_pfn_range(vma, vma->vm_start, pfn, PAGE_SIZE,
+			    vma->vm_page_prot)) {
+		vfree(dev->ring);
+		dev->ring = NULL;
+		ret = -EAGAIN;
+		goto out_unlock;
+	}
+
+	vma->vm_ops = &coalesced_mmio_buffer_vm_ops;
+	vma->vm_private_data = dev;
+
+out_unlock:
+	spin_unlock(&dev->ring_lock);
+
+	return ret;
+}
+
+static int coalesced_mmio_buffer_release(struct inode *inode, struct file *file)
+{
+
+	struct kvm_coalesced_mmio_buffer_dev *buffer_dev = file->private_data;
+	struct kvm_coalesced_mmio_dev *mmio_dev, *tmp;
+	struct kvm *kvm = buffer_dev->kvm;
+
+	/* Deregister all zones associated with this ring buffer */
+	mutex_lock(&kvm->slots_lock);
+
+	list_for_each_entry_safe(mmio_dev, tmp, &kvm->coalesced_zones, list) {
+		if (mmio_dev->buffer_dev == buffer_dev) {
+			if (kvm_io_bus_unregister_dev(kvm,
+						      mmio_dev->zone.pio ? KVM_PIO_BUS : KVM_MMIO_BUS,
+						      &mmio_dev->dev))
+				break;
+		}
+	}
+
+	list_del(&buffer_dev->list);
+	kfree(buffer_dev);
+
+	mutex_unlock(&kvm->slots_lock);
+
+	return 0;
+}
+
+static const struct file_operations coalesced_mmio_buffer_ops = {
+	.mmap = coalesced_mmio_buffer_mmap,
+	.release = coalesced_mmio_buffer_release,
+};
+
+int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm)
+{
+	int ret;
+	struct kvm_coalesced_mmio_buffer_dev *dev;
+
+	dev = kzalloc(sizeof(struct kvm_coalesced_mmio_buffer_dev),
+		      GFP_KERNEL_ACCOUNT);
+	if (!dev)
+		return -ENOMEM;
+
+	dev->kvm = kvm;
+	spin_lock_init(&dev->ring_lock);
+
+	ret = anon_inode_getfd("coalesced_mmio_buf", &coalesced_mmio_buffer_ops,
+			       dev, O_RDWR | O_CLOEXEC);
+	if (ret < 0) {
+		kfree(dev);
+		return ret;
+	}
+
+	mutex_lock(&kvm->slots_lock);
+	list_add_tail(&dev->list, &kvm->coalesced_buffers);
+	mutex_unlock(&kvm->slots_lock);
+
+	return ret;
+}
+
 int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
 					 struct kvm_coalesced_mmio_zone *zone)
 {
 	int ret;
 	struct kvm_coalesced_mmio_dev *dev;
+	struct kvm_coalesced_mmio_buffer_dev *buffer_dev = NULL;
 
 	if (zone->pio != 1 && zone->pio != 0)
 		return -EINVAL;
@@ -149,6 +274,7 @@ int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
 	kvm_iodevice_init(&dev->dev, &coalesced_mmio_ops);
 	dev->kvm = kvm;
 	dev->zone = *zone;
+	dev->buffer_dev = buffer_dev;
 
 	mutex_lock(&kvm->slots_lock);
 	ret = kvm_io_bus_register_dev(kvm,
diff --git a/virt/kvm/coalesced_mmio.h b/virt/kvm/coalesced_mmio.h
index 36f84264ed25..37d9d8f325bb 100644
--- a/virt/kvm/coalesced_mmio.h
+++ b/virt/kvm/coalesced_mmio.h
@@ -20,6 +20,14 @@ struct kvm_coalesced_mmio_dev {
 	struct kvm_io_device dev;
 	struct kvm *kvm;
 	struct kvm_coalesced_mmio_zone zone;
+	struct kvm_coalesced_mmio_buffer_dev *buffer_dev;
+};
+
+struct kvm_coalesced_mmio_buffer_dev {
+	struct list_head list;
+	struct kvm *kvm;
+	spinlock_t ring_lock;
+	struct kvm_coalesced_mmio_ring *ring;
 };
 
 int kvm_coalesced_mmio_init(struct kvm *kvm);
@@ -28,6 +36,7 @@ int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
 					 struct kvm_coalesced_mmio_zone *zone);
 int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
 					   struct kvm_coalesced_mmio_zone *zone);
+int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm);
 
 #else
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1192942aef91..54df2e88d4f4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -5169,6 +5169,10 @@ static long kvm_vm_ioctl(struct file *filp,
 		r = kvm_vm_ioctl_unregister_coalesced_mmio(kvm, &zone);
 		break;
 	}
+	case KVM_CREATE_COALESCED_MMIO_BUFFER: {
+		r = kvm_vm_ioctl_create_coalesced_mmio_buffer(kvm);
+		break;
+	}
 #endif
 	case KVM_IRQFD: {
 		struct kvm_irqfd data;

From patchwork Wed Jul 10 08:52:56 2024
X-Patchwork-Submitter: Ilias Stamatis
X-Patchwork-Id: 13729066
From: Ilias Stamatis
Subject: [PATCH 3/6] KVM: Support poll() on coalesced mmio buffer fds
Date: Wed, 10 Jul 2024 09:52:56 +0100
Message-ID: <20240710085259.2125131-4-ilstam@amazon.com>
In-Reply-To: <20240710085259.2125131-1-ilstam@amazon.com>

There is no direct way for userspace to be notified about coalesced
MMIO writes when using KVM_REGISTER_COALESCED_MMIO. If the next MMIO
exit is when the ring buffer has filled then a substantial (and
unbounded) amount of time may have passed since the first coalesced
MMIO write.

To improve this, make it possible for userspace to use poll() and
select() on the fd returned by the KVM_CREATE_COALESCED_MMIO_BUFFER
ioctl. This way a userspace VMM could have dedicated threads that deal
with writes to specific MMIO zones.

For example, a common use of MMIO, particularly in the realm of network
devices, is as a doorbell. A write to a doorbell register will trigger
the device to initiate a DMA transfer. When a network device is
emulated by userspace, a write to a doorbell register would typically
result in an MMIO exit so that userspace can emulate the DMA transfer
in a timely manner. No further processing can be done until userspace
performs the necessary emulation and re-invokes KVM_RUN.
Even if userspace makes use of another thread to emulate the DMA
transfer, such MMIO exits are disruptive to the vCPU and they may also
be quite frequent if, for example, the vCPU is sending a sequence of
short packets to the network device.

By supporting poll() on coalesced buffer fds, userspace can have
dedicated threads wait for new doorbell writes and avoid the
performance hit of userspace exits on the main vCPU threads.

Signed-off-by: Ilias Stamatis
Reviewed-by: Paul Durrant
---
 virt/kvm/coalesced_mmio.c | 20 ++++++++++++++++++++
 virt/kvm/coalesced_mmio.h |  1 +
 2 files changed, 21 insertions(+)

diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 6443d4b62548..00439e035d74 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 #include "coalesced_mmio.h"
@@ -98,6 +99,10 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 	smp_wmb();
 	ring->last = (insert + 1) % KVM_COALESCED_MMIO_MAX;
 	spin_unlock(lock);
+
+	if (dev->buffer_dev)
+		wake_up_interruptible(&dev->buffer_dev->wait_queue);
+
 	return 0;
 }
 
@@ -224,9 +229,23 @@ static int coalesced_mmio_buffer_release(struct inode *inode, struct file *file)
 	return 0;
 }
 
+static __poll_t coalesced_mmio_buffer_poll(struct file *file, struct poll_table_struct *wait)
+{
+	struct kvm_coalesced_mmio_buffer_dev *dev = file->private_data;
+	__poll_t mask = 0;
+
+	poll_wait(file, &dev->wait_queue, wait);
+
+	if (READ_ONCE(dev->ring->first) != READ_ONCE(dev->ring->last))
+		mask = POLLIN | POLLRDNORM;
+
+	return mask;
+}
+
 static const struct file_operations coalesced_mmio_buffer_ops = {
 	.mmap = coalesced_mmio_buffer_mmap,
 	.release = coalesced_mmio_buffer_release,
+	.poll = coalesced_mmio_buffer_poll,
 };
 
 int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm)
@@ -240,6 +259,7 @@ int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm)
 		return -ENOMEM;
 
 	dev->kvm = kvm;
+	init_waitqueue_head(&dev->wait_queue);
 	spin_lock_init(&dev->ring_lock);
 
 	ret = anon_inode_getfd("coalesced_mmio_buf", &coalesced_mmio_buffer_ops,
diff --git a/virt/kvm/coalesced_mmio.h b/virt/kvm/coalesced_mmio.h
index 37d9d8f325bb..d1807ce26464 100644
--- a/virt/kvm/coalesced_mmio.h
+++ b/virt/kvm/coalesced_mmio.h
@@ -26,6 +26,7 @@ struct kvm_coalesced_mmio_dev {
 struct kvm_coalesced_mmio_buffer_dev {
 	struct list_head list;
 	struct kvm *kvm;
+	wait_queue_head_t wait_queue;
 	spinlock_t ring_lock;
 	struct kvm_coalesced_mmio_ring *ring;
 };

From patchwork Wed Jul 10 08:52:57 2024
X-Patchwork-Submitter: Ilias Stamatis
X-Patchwork-Id: 13729067
From: Ilias Stamatis
Subject: [PATCH 4/6] KVM: Add KVM_(UN)REGISTER_COALESCED_MMIO2 ioctls
Date: Wed, 10 Jul 2024 09:52:57 +0100
Message-ID: <20240710085259.2125131-5-ilstam@amazon.com>
In-Reply-To: <20240710085259.2125131-1-ilstam@amazon.com>

Add two new ioctls, KVM_REGISTER_COALESCED_MMIO2 and
KVM_UNREGISTER_COALESCED_MMIO2. These do the same thing as their v1
equivalents, except that an fd returned by
KVM_CREATE_COALESCED_MMIO_BUFFER needs to be passed as an argument to
them.

The fd representing a ring buffer is associated with an MMIO region
registered for coalescing, and all writes to that region are
accumulated there. This is in contrast to the v1 API, where all regions
have to share the same buffer. Nevertheless, userspace code can still
use the same ring buffer for multiple zones if it wishes to do so.

Userspace can check for the availability of the new API by checking if
the KVM_CAP_COALESCED_MMIO2 capability is supported.
Signed-off-by: Ilias Stamatis
Reviewed-by: Paul Durrant
---
 include/uapi/linux/kvm.h  | 16 ++++++++++++++++
 virt/kvm/coalesced_mmio.c | 36 ++++++++++++++++++++++++++++++------
 virt/kvm/coalesced_mmio.h |  7 ++++---
 virt/kvm/kvm_main.c       | 36 +++++++++++++++++++++++++++++++++++-
 4 files changed, 85 insertions(+), 10 deletions(-)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 6d6f132e6203..e49dda50b639 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -467,6 +467,16 @@ struct kvm_coalesced_mmio_zone {
 	};
 };
 
+struct kvm_coalesced_mmio_zone2 {
+	__u64 addr;
+	__u32 size;
+	union {
+		__u32 pad;
+		__u32 pio;
+	};
+	int buffer_fd;
+};
+
 struct kvm_coalesced_mmio {
 	__u64 phys_addr;
 	__u32 len;
@@ -917,6 +927,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_MEMORY_ATTRIBUTES 233
 #define KVM_CAP_GUEST_MEMFD 234
 #define KVM_CAP_VM_TYPES 235
+#define KVM_CAP_COALESCED_MMIO2 236
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
@@ -1548,6 +1559,11 @@ struct kvm_create_guest_memfd {
 	__u64 reserved[6];
 };
 
+/* Available with KVM_CAP_COALESCED_MMIO2 */
 #define KVM_CREATE_COALESCED_MMIO_BUFFER _IO(KVMIO, 0xd5)
+#define KVM_REGISTER_COALESCED_MMIO2 \
+	_IOW(KVMIO, 0xd6, struct kvm_coalesced_mmio_zone2)
+#define KVM_UNREGISTER_COALESCED_MMIO2 \
+	_IOW(KVMIO, 0xd7, struct kvm_coalesced_mmio_zone2)
 
 #endif /* __LINUX_KVM_H */
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 00439e035d74..8d6d98c01f6e 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -277,19 +277,40 @@ int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm)
 }
 
 int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
-					 struct kvm_coalesced_mmio_zone *zone)
+					 struct kvm_coalesced_mmio_zone2 *zone,
+					 bool use_buffer_fd)
 {
-	int ret;
+	int ret = 0;
+	struct file *file;
 	struct kvm_coalesced_mmio_dev *dev;
 	struct kvm_coalesced_mmio_buffer_dev *buffer_dev = NULL;
 
 	if (zone->pio != 1 && zone->pio != 0)
 		return -EINVAL;
 
+	if (use_buffer_fd) {
+		file = fget(zone->buffer_fd);
+		if (!file)
+			return -EBADF;
+
+		if (file->f_op != &coalesced_mmio_buffer_ops) {
+			fput(file);
+			return -EINVAL;
+		}
+
+		buffer_dev = file->private_data;
+		if (!buffer_dev->ring) {
+			fput(file);
+			return -ENOBUFS;
+		}
+	}
+
 	dev = kzalloc(sizeof(struct kvm_coalesced_mmio_dev),
 		      GFP_KERNEL_ACCOUNT);
-	if (!dev)
-		return -ENOMEM;
+	if (!dev) {
+		ret = -ENOMEM;
+		goto out_free_file;
+	}
 
 	kvm_iodevice_init(&dev->dev, &coalesced_mmio_ops);
 	dev->kvm = kvm;
@@ -305,17 +326,20 @@ int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
 	list_add_tail(&dev->list, &kvm->coalesced_zones);
 	mutex_unlock(&kvm->slots_lock);
 
-	return 0;
+	goto out_free_file;
 
 out_free_dev:
 	mutex_unlock(&kvm->slots_lock);
 	kfree(dev);
 
+out_free_file:
+	if (use_buffer_fd)
+		fput(file);
+
 	return ret;
 }
 
 int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
-					   struct kvm_coalesced_mmio_zone *zone)
+					   struct kvm_coalesced_mmio_zone2 *zone)
 {
 	struct kvm_coalesced_mmio_dev *dev, *tmp;
 	int r;
diff --git a/virt/kvm/coalesced_mmio.h b/virt/kvm/coalesced_mmio.h
index d1807ce26464..32792adb7cb4 100644
--- a/virt/kvm/coalesced_mmio.h
+++ b/virt/kvm/coalesced_mmio.h
@@ -19,7 +19,7 @@ struct kvm_coalesced_mmio_dev {
 	struct list_head list;
 	struct kvm_io_device dev;
 	struct kvm *kvm;
-	struct kvm_coalesced_mmio_zone zone;
+	struct kvm_coalesced_mmio_zone2 zone;
 	struct kvm_coalesced_mmio_buffer_dev *buffer_dev;
 };
 
@@ -34,9 +34,10 @@ struct kvm_coalesced_mmio_buffer_dev {
 int kvm_coalesced_mmio_init(struct kvm *kvm);
 void kvm_coalesced_mmio_free(struct kvm *kvm);
 int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
-					 struct kvm_coalesced_mmio_zone *zone);
+					 struct kvm_coalesced_mmio_zone2 *zone,
+					 bool use_buffer_fd);
 int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
-					   struct kvm_coalesced_mmio_zone *zone);
+					   struct kvm_coalesced_mmio_zone2 *zone);
 int kvm_vm_ioctl_create_coalesced_mmio_buffer(struct kvm *kvm);
 
 #else
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 54df2e88d4f4..683b5d392b5f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4815,6 +4815,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #ifdef CONFIG_KVM_MMIO
 	case KVM_CAP_COALESCED_MMIO:
 		return KVM_COALESCED_MMIO_PAGE_OFFSET;
+	case KVM_CAP_COALESCED_MMIO2:
 	case KVM_CAP_COALESCED_PIO:
 		return 1;
 #endif
@@ -5153,15 +5154,48 @@ static long kvm_vm_ioctl(struct file *filp,
 #ifdef CONFIG_KVM_MMIO
 	case KVM_REGISTER_COALESCED_MMIO: {
 		struct kvm_coalesced_mmio_zone zone;
+		struct kvm_coalesced_mmio_zone2 zone2;
 
 		r = -EFAULT;
 		if (copy_from_user(&zone, argp, sizeof(zone)))
 			goto out;
-		r = kvm_vm_ioctl_register_coalesced_mmio(kvm, &zone);
+
+		zone2.addr = zone.addr;
+		zone2.size = zone.size;
+		zone2.pio = zone.pio;
+		zone2.buffer_fd = -1;
+
+		r = kvm_vm_ioctl_register_coalesced_mmio(kvm, &zone2, false);
 		break;
 	}
+	case KVM_REGISTER_COALESCED_MMIO2: {
+		struct kvm_coalesced_mmio_zone2 zone;
+
+		r = -EFAULT;
+		if (copy_from_user(&zone, argp, sizeof(zone)))
+			goto out;
+
+		r = kvm_vm_ioctl_register_coalesced_mmio(kvm, &zone, true);
+		break;
+	}
 	case KVM_UNREGISTER_COALESCED_MMIO: {
 		struct kvm_coalesced_mmio_zone zone;
+		struct kvm_coalesced_mmio_zone2 zone2;
+
+		r = -EFAULT;
+		if (copy_from_user(&zone, argp, sizeof(zone)))
+			goto out;
+
+		zone2.addr = zone.addr;
+		zone2.size = zone.size;
+		zone2.pio = zone.pio;
+		zone2.buffer_fd = -1;
+
+		r = kvm_vm_ioctl_unregister_coalesced_mmio(kvm, &zone2);
+		break;
+	}
+	case KVM_UNREGISTER_COALESCED_MMIO2: {
+		struct kvm_coalesced_mmio_zone2 zone;
 
 		r = -EFAULT;
 		if (copy_from_user(&zone, argp, sizeof(zone)))

From patchwork Wed Jul 10 08:52:58 2024
From: Ilias Stamatis
Subject: [PATCH 5/6] KVM: Documentation: Document v2 of coalesced MMIO API
Date: Wed, 10 Jul 2024 09:52:58 +0100
Message-ID: <20240710085259.2125131-6-ilstam@amazon.com>

Document the KVM_CREATE_COALESCED_MMIO_BUFFER and
KVM_REGISTER_COALESCED_MMIO2 ioctls.
Signed-off-by: Ilias Stamatis
Reviewed-by: Paul Durrant
---
 Documentation/virt/kvm/api.rst | 91 ++++++++++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index a71d91978d9e..e91c3cae3621 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4913,6 +4913,8 @@ For the definition of struct kvm_nested_state, see KVM_GET_NESTED_STATE.
 :Parameters: struct kvm_coalesced_mmio_zone
 :Returns: 0 on success, < 0 on error
 
+KVM_(UN)REGISTER_COALESCED_MMIO2 can be used instead if available.
+
 Coalesced I/O is a performance optimization that defers hardware
 register write emulation so that userspace exits are avoided.  It is
 typically used to reduce the overhead of emulating frequently accessed
@@ -6352,6 +6354,95 @@ a single guest_memfd file, but the bound ranges must not overlap).
 
 See KVM_SET_USER_MEMORY_REGION2 for additional details.
 
+4.143 KVM_CREATE_COALESCED_MMIO_BUFFER
+--------------------------------------
+
+:Capability: KVM_CAP_COALESCED_MMIO2
+:Architectures: all
+:Type: vm ioctl
+:Parameters: none
+:Returns: an fd on success, < 0 on error
+
+Returns an fd, but does not allocate a buffer. Also see
+KVM_REGISTER_COALESCED_MMIO2.
+
+The fd must first be passed to mmap() to allocate a page to be used as a ring
+buffer that is shared between kernel and userspace. The page must be
+interpreted as a struct kvm_coalesced_mmio_ring.
+
+::
+
+  struct kvm_coalesced_mmio_ring {
+	__u32 first, last;
+	struct kvm_coalesced_mmio coalesced_mmio[];
+  };
+
+The kernel will increment the last index and userspace is expected to do the
+same with the first index after consuming entries. The upper bound of the
+coalesced_mmio array is defined as KVM_COALESCED_MMIO_MAX.
+
+::
+
+  struct kvm_coalesced_mmio {
+	__u64 phys_addr;
+	__u32 len;
+	union {
+		__u32 pad;
+		__u32 pio;
+	};
+	__u8 data[8];
+  };
+
+After allocating a buffer with mmap(), the fd must be passed as an argument to
+KVM_REGISTER_COALESCED_MMIO2 to associate with the ring buffer an I/O region
+whose writes are to be coalesced. Multiple I/O regions can be associated with
+the same ring buffer.
+
+poll() is also supported on the fd so that userspace can be notified of I/O
+writes without having to wait until the next exit to userspace.
+
+4.144 KVM_(UN)REGISTER_COALESCED_MMIO2
+--------------------------------------
+
+:Capability: KVM_CAP_COALESCED_MMIO2
+:Architectures: all
+:Type: vm ioctl
+:Parameters: struct kvm_coalesced_mmio_zone2
+:Returns: 0 on success, < 0 on error
+
+Coalesced I/O is a performance optimization that defers hardware register write
+emulation so that userspace exits are avoided. It is typically used to reduce
+the overhead of emulating frequently accessed hardware registers.
+
+When a hardware register is configured for coalesced I/O, write accesses do not
+exit to userspace and their value is recorded in a ring buffer that is shared
+between kernel and userspace.
+
+::
+
+  struct kvm_coalesced_mmio_zone2 {
+	__u64 addr;
+	__u32 size;
+	union {
+		__u32 pad;
+		__u32 pio;
+	};
+	int buffer_fd;
+  };
+
+KVM_CREATE_COALESCED_MMIO_BUFFER must be used to allocate a buffer fd, and the
+fd must first be mmap()ed before being passed to KVM_REGISTER_COALESCED_MMIO2,
+otherwise the ioctl will fail.
+
+Coalesced I/O can be used when one or more write accesses to a hardware
+register can be deferred until a read or a write to another hardware register
+on the same device. That last access causes a vmexit, and userspace processes
+the accesses accumulated in the ring buffer before emulating it, which avoids
+a userspace exit for every coalesced write.
+
+Alternatively, userspace can call poll() on the buffer fd if it wishes to be
+notified of new I/O writes that way.
+
 5.
The kvm_run structure
========================

From patchwork Wed Jul 10 08:52:59 2024
From: Ilias Stamatis
Subject: [PATCH 6/6] KVM: selftests: Add coalesced_mmio_test
Date: Wed, 10 Jul 2024 09:52:59 +0100
Message-ID: <20240710085259.2125131-7-ilstam@amazon.com>
Test the KVM_CREATE_COALESCED_MMIO_BUFFER, KVM_REGISTER_COALESCED_MMIO2 and
KVM_UNREGISTER_COALESCED_MMIO2 ioctls.

Signed-off-by: Ilias Stamatis
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../selftests/kvm/coalesced_mmio_test.c       | 310 ++++++++++++++++++
 2 files changed, 311 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/coalesced_mmio_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ac280dcba996..ddd2c3450348 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -145,6 +145,7 @@ TEST_GEN_PROGS_x86_64 += set_memory_region_test
 TEST_GEN_PROGS_x86_64 += steal_time
 TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
 TEST_GEN_PROGS_x86_64 += system_counter_offset_test
+TEST_GEN_PROGS_x86_64 += coalesced_mmio_test
 
 # Compiled outputs used by test targets
 TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
diff --git a/tools/testing/selftests/kvm/coalesced_mmio_test.c b/tools/testing/selftests/kvm/coalesced_mmio_test.c
new file mode 100644
index 000000000000..7f57d5925ba0
--- /dev/null
+++ b/tools/testing/selftests/kvm/coalesced_mmio_test.c
@@ -0,0 +1,310 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+ *
+ * Test the KVM_CREATE_COALESCED_MMIO_BUFFER, KVM_REGISTER_COALESCED_MMIO2 and
+ * KVM_UNREGISTER_COALESCED_MMIO2 ioctls by making sure that MMIO writes to
+ * associated zones end up in the correct ring buffer. Also test that we don't
+ * exit to userspace when there is space in the corresponding buffer.
+ */
+
+#include
+#include
+
+#define PAGE_SIZE 4096
+
+/*
+ * Somewhat arbitrary location and slot, intended to not overlap anything.
+ */
+#define MEM_REGION_SLOT 10
+#define MEM_REGION_GPA 0xc0000000UL
+#define MEM_REGION_SIZE (PAGE_SIZE * 2)
+#define MEM_REGION_PAGES DIV_ROUND_UP(MEM_REGION_SIZE, PAGE_SIZE)
+
+#define COALESCING_ZONE1_GPA MEM_REGION_GPA
+#define COALESCING_ZONE1_SIZE PAGE_SIZE
+#define COALESCING_ZONE2_GPA (COALESCING_ZONE1_GPA + COALESCING_ZONE1_SIZE)
+#define COALESCING_ZONE2_SIZE PAGE_SIZE
+
+#define MMIO_WRITE_DATA 0xbaad
+#define MMIO_WRITE_DATA2 0xbeef
+
+#define BATCH_SIZE 4
+
+static void guest_code(void)
+{
+	uint64_t *gpa;
+
+	/*
+	 * The first write should result in an exit
+	 */
+	gpa = (uint64_t *)(MEM_REGION_GPA);
+	WRITE_ONCE(*gpa, MMIO_WRITE_DATA);
+
+	/*
+	 * These writes should be stored in a coalescing ring buffer and only
+	 * the last one should result in an exit.
+	 */
+	for (int i = 0; i < KVM_COALESCED_MMIO_MAX; i++) {
+		gpa = (uint64_t *)(COALESCING_ZONE1_GPA + i * sizeof(*gpa));
+		WRITE_ONCE(*gpa, MMIO_WRITE_DATA + i);
+
+		/* Let's throw a PIO into the mix */
+		if (i == KVM_COALESCED_MMIO_MAX / 2)
+			GUEST_SYNC(0);
+	}
+
+	/*
+	 * These writes should be stored in two separate ring buffers and they
+	 * shouldn't result in an exit.
+	 */
+	for (int i = 0; i < BATCH_SIZE; i++) {
+		gpa = (uint64_t *)(COALESCING_ZONE1_GPA + i * sizeof(*gpa));
+		WRITE_ONCE(*gpa, MMIO_WRITE_DATA + i);
+
+		gpa = (uint64_t *)(COALESCING_ZONE2_GPA + i * sizeof(*gpa));
+		WRITE_ONCE(*gpa, MMIO_WRITE_DATA2 + i);
+	}
+
+	GUEST_SYNC(0);
+
+	/*
+	 * These writes should be stored in the same ring buffer and they
+	 * shouldn't result in an exit.
+	 */
+	for (int i = 0; i < BATCH_SIZE; i++) {
+		if (i < BATCH_SIZE / 2)
+			gpa = (uint64_t *)(COALESCING_ZONE1_GPA + i * sizeof(*gpa));
+		else
+			gpa = (uint64_t *)(COALESCING_ZONE2_GPA + i * sizeof(*gpa));
+
+		WRITE_ONCE(*gpa, MMIO_WRITE_DATA2 + i);
+	}
+
+	GUEST_SYNC(0);
+
+	/*
+	 * This last write should result in an exit because the host should
+	 * have disabled I/O coalescing by now.
+	 */
+	gpa = (uint64_t *)(COALESCING_ZONE1_GPA);
+	WRITE_ONCE(*gpa, MMIO_WRITE_DATA);
+}
+
+static void assert_mmio_write(struct kvm_vcpu *vcpu, uint64_t addr, uint64_t value)
+{
+	uint64_t data;
+
+	TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_MMIO);
+	TEST_ASSERT(vcpu->run->mmio.is_write, "Got MMIO read, not MMIO write");
+
+	memcpy(&data, vcpu->run->mmio.data, vcpu->run->mmio.len);
+	TEST_ASSERT_EQ(vcpu->run->mmio.phys_addr, addr);
+	TEST_ASSERT_EQ(value, data);
+}
+
+static void assert_ucall_exit(struct kvm_vcpu *vcpu, uint64_t command)
+{
+	uint64_t cmd;
+	struct ucall uc;
+
+	TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
+	cmd = get_ucall(vcpu, &uc);
+	TEST_ASSERT_EQ(cmd, command);
+}
+
+static void assert_ring_entries(struct kvm_coalesced_mmio_ring *ring,
+				uint32_t nentries,
+				uint64_t addr,
+				uint64_t value)
+{
+	uint64_t data;
+
+	for (int i = READ_ONCE(ring->first); i < nentries; i++) {
+		TEST_ASSERT_EQ(READ_ONCE(ring->coalesced_mmio[i].len),
+			       sizeof(data));
+		memcpy(&data, ring->coalesced_mmio[i].data,
+		       READ_ONCE(ring->coalesced_mmio[i].len));
+
+		TEST_ASSERT_EQ(READ_ONCE(ring->coalesced_mmio[i].phys_addr),
+			       addr + i * sizeof(data));
+		TEST_ASSERT_EQ(data, value + i);
+	}
+}
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	uint64_t gpa;
+	struct kvm_coalesced_mmio_ring *ring, *ring2;
+	struct kvm_coalesced_mmio_zone2 zone, zone2;
+	int ring_fd, ring_fd2;
+	int r;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_COALESCED_MMIO2));
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_READONLY_MEM));
+	TEST_ASSERT(BATCH_SIZE * 2 <= KVM_COALESCED_MMIO_MAX,
+		    "KVM_COALESCED_MMIO_MAX too small");
+
+	vm = vm_create_with_one_vcpu(&vcpu, guest_code);
+
+	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, MEM_REGION_GPA,
+				    MEM_REGION_SLOT, MEM_REGION_PAGES,
+				    KVM_MEM_READONLY);
+
+	gpa = vm_phy_pages_alloc(vm, MEM_REGION_PAGES, MEM_REGION_GPA,
+				 MEM_REGION_SLOT);
+	TEST_ASSERT(gpa == MEM_REGION_GPA, "Failed vm_phy_pages_alloc");
+
+	virt_map(vm, MEM_REGION_GPA, MEM_REGION_GPA, MEM_REGION_PAGES);
+
+	/*
+	 * Test that allocating an fd and memory mapping it works
+	 */
+	ring_fd = __vm_ioctl(vm, KVM_CREATE_COALESCED_MMIO_BUFFER, NULL);
+	TEST_ASSERT(ring_fd != -1, "Failed KVM_CREATE_COALESCED_MMIO_BUFFER");
+
+	ring = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
+		    ring_fd, 0);
+	TEST_ASSERT(ring != MAP_FAILED, "Failed to allocate ring buffer");
+
+	/*
+	 * Test that trying to map the same fd again fails
+	 */
+	ring2 = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
+		     ring_fd, 0);
+	TEST_ASSERT(ring2 == MAP_FAILED && errno == EBUSY,
+		    "Mapping the same fd again should fail with EBUSY");
+
+	/*
+	 * Test that the first and last ring indices are zero
+	 */
+	TEST_ASSERT_EQ(READ_ONCE(ring->first), 0);
+	TEST_ASSERT_EQ(READ_ONCE(ring->last), 0);
+
+	/*
+	 * Run the vCPU and make sure the first MMIO write results in a
+	 * userspace exit since we have not set up MMIO coalescing yet.
+	 */
+	vcpu_run(vcpu);
+	assert_mmio_write(vcpu, MEM_REGION_GPA, MMIO_WRITE_DATA);
+
+	/*
+	 * Let's actually set up MMIO coalescing now...
+	 */
+	zone.addr = COALESCING_ZONE1_GPA;
+	zone.size = COALESCING_ZONE1_SIZE;
+	zone.buffer_fd = ring_fd;
+	r = __vm_ioctl(vm, KVM_REGISTER_COALESCED_MMIO2, &zone);
+	TEST_ASSERT(r != -1, "Failed KVM_REGISTER_COALESCED_MMIO2");
+
+	/*
+	 * The guest will start doing MMIO writes in the coalesced regions but
+	 * will also do a ucall when the buffer is half full. The first
+	 * userspace exit should be due to the ucall and not an MMIO exit.
+	 */
+	vcpu_run(vcpu);
+	assert_ucall_exit(vcpu, UCALL_SYNC);
+	TEST_ASSERT_EQ(READ_ONCE(ring->first), 0);
+	TEST_ASSERT_EQ(READ_ONCE(ring->last), KVM_COALESCED_MMIO_MAX / 2 + 1);
+
+	/*
+	 * Run the guest again. Next exit should be when the buffer is full.
+	 * One entry always remains unused.
+	 */
+	vcpu_run(vcpu);
+	assert_mmio_write(vcpu,
+			  COALESCING_ZONE1_GPA + (KVM_COALESCED_MMIO_MAX - 1) * sizeof(uint64_t),
+			  MMIO_WRITE_DATA + KVM_COALESCED_MMIO_MAX - 1);
+	TEST_ASSERT_EQ(READ_ONCE(ring->first), 0);
+	TEST_ASSERT_EQ(READ_ONCE(ring->last), KVM_COALESCED_MMIO_MAX - 1);
+
+	assert_ring_entries(ring, KVM_COALESCED_MMIO_MAX - 1,
+			    COALESCING_ZONE1_GPA, MMIO_WRITE_DATA);
+
+	/*
+	 * Let's set up another region with a separate buffer
+	 */
+	ring_fd2 = __vm_ioctl(vm, KVM_CREATE_COALESCED_MMIO_BUFFER, NULL);
+	TEST_ASSERT(ring_fd2 != -1, "Failed KVM_CREATE_COALESCED_MMIO_BUFFER");
+
+	ring2 = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
+		     ring_fd2, 0);
+	TEST_ASSERT(ring2 != MAP_FAILED, "Failed to allocate ring buffer");
+
+	zone2.addr = COALESCING_ZONE2_GPA;
+	zone2.size = COALESCING_ZONE2_SIZE;
+	zone2.buffer_fd = ring_fd2;
+	r = __vm_ioctl(vm, KVM_REGISTER_COALESCED_MMIO2, &zone2);
+	TEST_ASSERT(r != -1, "Failed KVM_REGISTER_COALESCED_MMIO2");
+
+	/*
+	 * Move the consumer pointer of the first ring forward.
+	 *
+	 * When re-entering the vCPU the guest will write BATCH_SIZE
+	 * times to each MMIO zone.
+	 */
+	WRITE_ONCE(ring->first,
+		   (READ_ONCE(ring->first) + BATCH_SIZE) % KVM_COALESCED_MMIO_MAX);
+
+	vcpu_run(vcpu);
+	assert_ucall_exit(vcpu, UCALL_SYNC);
+
+	TEST_ASSERT_EQ(READ_ONCE(ring->first), BATCH_SIZE);
+	TEST_ASSERT_EQ(READ_ONCE(ring->last),
+		       (KVM_COALESCED_MMIO_MAX - 1 + BATCH_SIZE) % KVM_COALESCED_MMIO_MAX);
+	TEST_ASSERT_EQ(READ_ONCE(ring2->first), 0);
+	TEST_ASSERT_EQ(READ_ONCE(ring2->last), BATCH_SIZE);
+
+	assert_ring_entries(ring, BATCH_SIZE, COALESCING_ZONE1_GPA, MMIO_WRITE_DATA);
+	assert_ring_entries(ring2, BATCH_SIZE, COALESCING_ZONE2_GPA, MMIO_WRITE_DATA2);
+
+	/*
+	 * Unregister zone 2 and register it again, but this time use the same
+	 * ring buffer used for zone 1.
+	 */
+	r = __vm_ioctl(vm, KVM_UNREGISTER_COALESCED_MMIO2, &zone2);
+	TEST_ASSERT(r != -1, "Failed KVM_UNREGISTER_COALESCED_MMIO2");
+
+	zone2.buffer_fd = ring_fd;
+	r = __vm_ioctl(vm, KVM_REGISTER_COALESCED_MMIO2, &zone2);
+	TEST_ASSERT(r != -1, "Failed KVM_REGISTER_COALESCED_MMIO2");
+
+	/*
+	 * Enter the vCPU again. This time writes to both regions should go
+	 * to the same ring buffer.
+	 */
+	WRITE_ONCE(ring->first,
+		   (READ_ONCE(ring->first) + BATCH_SIZE) % KVM_COALESCED_MMIO_MAX);
+
+	vcpu_run(vcpu);
+	assert_ucall_exit(vcpu, UCALL_SYNC);
+
+	TEST_ASSERT_EQ(READ_ONCE(ring->first), BATCH_SIZE * 2);
+	TEST_ASSERT_EQ(READ_ONCE(ring->last),
+		       (KVM_COALESCED_MMIO_MAX - 1 + BATCH_SIZE * 2) % KVM_COALESCED_MMIO_MAX);
+
+	/*
+	 * Test that munmap() and close() work.
+	 */
+	r = munmap(ring, PAGE_SIZE);
+	TEST_ASSERT(r == 0, "Failed to munmap()");
+	r = close(ring_fd);
+	TEST_ASSERT(r == 0, "Failed to close()");
+
+	r = munmap(ring2, PAGE_SIZE);
+	TEST_ASSERT(r == 0, "Failed to munmap()");
+	r = close(ring_fd2);
+	TEST_ASSERT(r == 0, "Failed to close()");
+
+	/*
+	 * close() should have also deregistered all I/O regions associated
+	 * with the ring buffer automatically. Make sure that when the guest
+	 * writes to the region again this results in an immediate exit.
+	 */
+	vcpu_run(vcpu);
+	assert_mmio_write(vcpu, COALESCING_ZONE1_GPA, MMIO_WRITE_DATA);
+
+	return 0;
+}