From patchwork Wed Apr 4 12:54:04 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wei Xu
X-Patchwork-Id: 10322605
From: wexu@redhat.com
To: wexu@redhat.com, jasowang@redhat.com, mst@redhat.com, tiwei.bie@intel.com,
 jfreimann@redhat.com, qemu-devel@nongnu.org
Date: Wed, 4 Apr 2018 20:54:04 +0800
Message-Id: <1522846444-31725-9-git-send-email-wexu@redhat.com>
In-Reply-To: <1522846444-31725-1-git-send-email-wexu@redhat.com>
References: <1522846444-31725-1-git-send-email-wexu@redhat.com>
Subject: [Qemu-devel] [PATCH 8/8] virtio: queue pop support for packed ring

From: Wei Xu

Cloned from the split ring pop. A global static length array and the
in-element length array are introduced to ease prototyping; this consumes
more memory, and it would be worthwhile to move to dynamic allocation,
as the out/in sg arrays already do.

Signed-off-by: Wei Xu
---
 hw/virtio/virtio.c | 154 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 153 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index cf726f3..0eafb38 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1221,7 +1221,7 @@ static void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_nu
     return elem;
 }
 
-void *virtqueue_pop(VirtQueue *vq, size_t sz)
+static void *virtqueue_pop_split(VirtQueue *vq, size_t sz)
 {
     unsigned int i, head, max;
     VRingMemoryRegionCaches *caches;
@@ -1356,6 +1356,158 @@ err_undo_map:
     goto done;
 }
 
+static uint16_t dma_len[VIRTQUEUE_MAX_SIZE];
+static void *virtqueue_pop_packed(VirtQueue *vq, size_t sz)
+{
+    unsigned int i, head, max;
+    VRingMemoryRegionCaches *caches;
+    MemoryRegionCache indirect_desc_cache = MEMORY_REGION_CACHE_INVALID;
+    MemoryRegionCache *cache;
+    int64_t len;
+    VirtIODevice *vdev = vq->vdev;
+    VirtQueueElement *elem = NULL;
+    unsigned out_num, in_num, elem_entries;
+    hwaddr addr[VIRTQUEUE_MAX_SIZE];
+    struct iovec iov[VIRTQUEUE_MAX_SIZE];
+    VRingDescPacked desc;
+    uint8_t wrap_counter;
+
+    if (unlikely(vdev->broken)) {
+        return NULL;
+    }
+
+    vq->last_avail_idx %= vq->packed.num;
+
+    rcu_read_lock();
+    if (virtio_queue_empty_packed_rcu(vq)) {
+        goto done;
+    }
+
+    /* When we start there are none of either input nor output. */
+    out_num = in_num = elem_entries = 0;
+
+    max = vq->vring.num;
+
+    if (vq->inuse >= vq->vring.num) {
+        virtio_error(vdev, "Virtqueue size exceeded");
+        goto done;
+    }
+
+    if (virtio_vdev_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX)) {
+        /* FIXME: TBD */
+    }
+
+    head = vq->last_avail_idx;
+    i = head;
+
+    caches = vring_get_region_caches(vq);
+    cache = &caches->desc_packed;
+    vring_desc_read_packed(vdev, &desc, cache, i);
+    if (desc.flags & VRING_DESC_F_INDIRECT) {
+        if (desc.len % sizeof(VRingDescPacked)) {
+            virtio_error(vdev, "Invalid size for indirect buffer table");
+            goto done;
+        }
+
+        /* loop over the indirect descriptor table */
+        len = address_space_cache_init(&indirect_desc_cache, vdev->dma_as,
+                                       desc.addr, desc.len, false);
+        cache = &indirect_desc_cache;
+        if (len < desc.len) {
+            virtio_error(vdev, "Cannot map indirect buffer");
+            goto done;
+        }
+
+        max = desc.len / sizeof(VRingDescPacked);
+        i = 0;
+        vring_desc_read_packed(vdev, &desc, cache, i);
+    }
+
+    wrap_counter = vq->wrap_counter;
+    /* Collect all the descriptors */
+    while (1) {
+        bool map_ok;
+
+        if (desc.flags & VRING_DESC_F_WRITE) {
+            map_ok = virtqueue_map_desc(vdev, &in_num, addr + out_num,
+                                        iov + out_num,
+                                        VIRTQUEUE_MAX_SIZE - out_num, true,
+                                        desc.addr, desc.len);
+        } else {
+            if (in_num) {
+                virtio_error(vdev, "Incorrect order for descriptors");
+                goto err_undo_map;
+            }
+            map_ok = virtqueue_map_desc(vdev, &out_num, addr, iov,
+                                        VIRTQUEUE_MAX_SIZE, false,
+                                        desc.addr, desc.len);
+        }
+        if (!map_ok) {
+            goto err_undo_map;
+        }
+
+        /* If we've got too many, that implies a descriptor loop. */
+        if (++elem_entries > max) {
+            virtio_error(vdev, "Looped descriptor");
+            goto err_undo_map;
+        }
+
+        dma_len[i++] = desc.len;
+        /* Toggle wrap_counter for non indirect desc */
+        if ((i == vq->packed.num) && (cache != &indirect_desc_cache)) {
+            vq->wrap_counter ^= 1;
+        }
+
+        if (desc.flags & VRING_DESC_F_NEXT) {
+            vring_desc_read_packed(vq->vdev, &desc, cache, i % vq->packed.num);
+        } else {
+            break;
+        }
+    }
+
+    /* Now copy what we have collected and mapped */
+    elem = virtqueue_alloc_element(sz, out_num, in_num);
+    elem->index = head;
+    elem->wrap_counter = wrap_counter;
+    elem->count = (cache == &indirect_desc_cache) ? 1 : out_num + in_num;
+    for (i = 0; i < out_num; i++) {
+        /* DMA Done by marking the length as 0 */
+        elem->len[i] = 0;
+        elem->out_addr[i] = addr[i];
+        elem->out_sg[i] = iov[i];
+    }
+    for (i = 0; i < in_num; i++) {
+        elem->len[out_num + i] = dma_len[head + out_num + i];
+        elem->in_addr[i] = addr[out_num + i];
+        elem->in_sg[i] = iov[out_num + i];
+    }
+
+    vq->last_avail_idx += elem->count;
+    vq->last_avail_idx %= vq->packed.num;
+    vq->inuse += elem->count;
+
+    trace_virtqueue_pop(vq, elem, elem->in_num, elem->out_num);
+done:
+    address_space_cache_destroy(&indirect_desc_cache);
+    rcu_read_unlock();
+
+    return elem;
+
+err_undo_map:
+    virtqueue_undo_map_desc(out_num, in_num, iov);
+    g_free(elem);
+    goto done;
+}
+
+void *virtqueue_pop(VirtQueue *vq, size_t sz)
+{
+    if (virtio_vdev_has_feature(vq->vdev, VIRTIO_F_RING_PACKED)) {
+        return virtqueue_pop_packed(vq, sz);
+    } else {
+        return virtqueue_pop_split(vq, sz);
+    }
+}
+
 /* virtqueue_drop_all:
  * @vq: The #VirtQueue
  * Drops all queued buffers and indicates them to the guest
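
Note (not part of the patch): the commit message says the file-scope static uint16_t dma_len[VIRTQUEUE_MAX_SIZE] scratch array is a prototyping shortcut and that moving to dynamic allocation, as is already done for the out/in sg arrays, would be preferable. The fragment below is only a minimal standalone sketch of that direction; it uses plain calloc/free instead of QEMU's glib helpers, and PackedElemLens and its fields are illustrative names, not the real VirtQueueElement layout.

#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for the per-element length state built up by pop. */
typedef struct {
    unsigned count;   /* number of descriptors consumed by this element */
    uint32_t *len;    /* per-descriptor lengths, sized at pop time */
} PackedElemLens;

/*
 * Size the length array per element instead of keeping one static
 * VIRTQUEUE_MAX_SIZE-sized array: memory is held only while the element
 * is in flight, and two queues popping concurrently no longer share
 * scratch state.
 */
static PackedElemLens *packed_elem_lens_new(unsigned num_descs)
{
    PackedElemLens *e = calloc(1, sizeof(*e));
    if (!e) {
        return NULL;
    }
    e->count = num_descs;
    e->len = calloc(num_descs, sizeof(e->len[0]));
    if (!e->len) {
        free(e);
        return NULL;
    }
    return e;
}

static void packed_elem_lens_free(PackedElemLens *e)
{
    if (e) {
        free(e->len);
        free(e);
    }
}

In the real code this could presumably be folded into virtqueue_alloc_element(), which already lays out the in/out sg and addr arrays from out_num/in_num, so the length array could become another trailing region of that single allocation.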