From patchwork Sun Jan 31 10:29:01 2016
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 8173261
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: cornelia.huck@de.ibm.com, mst@redhat.com
Date: Sun, 31 Jan 2016 11:29:01 +0100
Message-Id: <1454236146-23293-6-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1454236146-23293-1-git-send-email-pbonzini@redhat.com>
References: <1454236146-23293-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 05/10] virtio: slim down allocation of VirtQueueElements
Build the addresses and s/g lists on the stack, and then copy them
to a VirtQueueElement that is just as big as required to contain this
particular s/g list.  The cost of the copy is minimal compared to that
of a large malloc.

When virtqueue_map is used on the destination side of migration or on
loadvm, the iovecs have already been split at memory region boundary,
so we can just reuse the out_num/in_num we find in the file.

Reviewed-by: Cornelia Huck
Signed-off-by: Paolo Bonzini
---
	v1->v2: change bools from 1 and 0 to "true" and "false" [Conny]

 hw/virtio/virtio.c | 82 +++++++++++++++++++++++++++++++++---------------------
 1 file changed, 51 insertions(+), 31 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index f49c5ae..79a635f 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -448,6 +448,32 @@ int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
     return in_bytes <= in_total && out_bytes <= out_total;
 }
 
+static void virtqueue_map_desc(unsigned int *p_num_sg, hwaddr *addr, struct iovec *iov,
+                               unsigned int max_num_sg, bool is_write,
+                               hwaddr pa, size_t sz)
+{
+    unsigned num_sg = *p_num_sg;
+    assert(num_sg <= max_num_sg);
+
+    while (sz) {
+        hwaddr len = sz;
+
+        if (num_sg == max_num_sg) {
+            error_report("virtio: too many write descriptors in indirect table");
+            exit(1);
+        }
+
+        iov[num_sg].iov_base = cpu_physical_memory_map(pa, &len, is_write);
+        iov[num_sg].iov_len = len;
+        addr[num_sg] = pa;
+
+        sz -= len;
+        pa += len;
+        num_sg++;
+    }
+    *p_num_sg = num_sg;
+}
+
 static void virtqueue_map_iovec(struct iovec *sg, hwaddr *addr,
                                 unsigned int *num_sg, unsigned int max_size,
                                 int is_write)
@@ -474,20 +500,10 @@ static void virtqueue_map_iovec(struct iovec *sg, hwaddr *addr,
             error_report("virtio: error trying to map MMIO memory");
             exit(1);
         }
-        if (len == sg[i].iov_len) {
-            continue;
-        }
-        if (*num_sg >= max_size) {
-            error_report("virtio: memory split makes iovec too large");
+        if (len != sg[i].iov_len) {
+            error_report("virtio: unexpected memory split");
             exit(1);
         }
-        memmove(sg + i + 1, sg + i, sizeof(*sg) * (*num_sg - i));
-        memmove(addr + i + 1, addr + i, sizeof(*addr) * (*num_sg - i));
-        assert(len < sg[i + 1].iov_len);
-        sg[i].iov_len = len;
-        addr[i + 1] += len;
-        sg[i + 1].iov_len -= len;
-        ++*num_sg;
     }
 }
 
@@ -526,14 +542,16 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
     hwaddr desc_pa = vq->vring.desc;
     VirtIODevice *vdev = vq->vdev;
     VirtQueueElement *elem;
+    unsigned out_num, in_num;
+    hwaddr addr[VIRTQUEUE_MAX_SIZE];
+    struct iovec iov[VIRTQUEUE_MAX_SIZE];
 
     if (!virtqueue_num_heads(vq, vq->last_avail_idx)) {
         return NULL;
     }
 
     /* When we start there are none of either input nor output. */
-    elem = virtqueue_alloc_element(sz, VIRTQUEUE_MAX_SIZE, VIRTQUEUE_MAX_SIZE);
-    elem->out_num = elem->in_num = 0;
+    out_num = in_num = 0;
 
     max = vq->vring.num;
 
@@ -556,37 +574,39 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
 
     /* Collect all the descriptors */
     do {
-        struct iovec *sg;
+        hwaddr pa = vring_desc_addr(vdev, desc_pa, i);
+        size_t len = vring_desc_len(vdev, desc_pa, i);
 
         if (vring_desc_flags(vdev, desc_pa, i) & VRING_DESC_F_WRITE) {
-            if (elem->in_num >= VIRTQUEUE_MAX_SIZE) {
-                error_report("Too many write descriptors in indirect table");
-                exit(1);
-            }
-            elem->in_addr[elem->in_num] = vring_desc_addr(vdev, desc_pa, i);
-            sg = &elem->in_sg[elem->in_num++];
+            virtqueue_map_desc(&in_num, addr + out_num, iov + out_num,
+                               VIRTQUEUE_MAX_SIZE - out_num, true, pa, len);
         } else {
-            if (elem->out_num >= VIRTQUEUE_MAX_SIZE) {
-                error_report("Too many read descriptors in indirect table");
+            if (in_num) {
+                error_report("Incorrect order for descriptors");
                 exit(1);
             }
-            elem->out_addr[elem->out_num] = vring_desc_addr(vdev, desc_pa, i);
-            sg = &elem->out_sg[elem->out_num++];
+            virtqueue_map_desc(&out_num, addr, iov,
+                               VIRTQUEUE_MAX_SIZE, false, pa, len);
         }
 
-        sg->iov_len = vring_desc_len(vdev, desc_pa, i);
-
         /* If we've got too many, that implies a descriptor loop. */
-        if ((elem->in_num + elem->out_num) > max) {
+        if ((in_num + out_num) > max) {
             error_report("Looped descriptor");
             exit(1);
         }
     } while ((i = virtqueue_next_desc(vdev, desc_pa, i, max)) != max);
 
-    /* Now map what we have collected */
-    virtqueue_map(elem);
-
+    /* Now copy what we have collected and mapped */
+    elem = virtqueue_alloc_element(sz, out_num, in_num);
     elem->index = head;
+    for (i = 0; i < out_num; i++) {
+        elem->out_addr[i] = addr[i];
+        elem->out_sg[i] = iov[i];
+    }
+    for (i = 0; i < in_num; i++) {
+        elem->in_addr[i] = addr[out_num + i];
+        elem->in_sg[i] = iov[out_num + i];
+    }
 
     vq->inuse++;
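For readers following along outside the tree, the stack-then-copy pattern the patch relies on can be sketched in isolation. Everything below is a hypothetical simplification, not QEMU code: `Elem`, `elem_addr_t`, `elem_alloc`, and `elem_from_stack` are illustrative names standing in for VirtQueueElement, hwaddr, virtqueue_alloc_element, and the two copy loops at the end of the new virtqueue_pop. The point is the same, though: gather descriptors into fixed-size stack arrays first, and only once out_num/in_num are known allocate one heap object sized for exactly that many entries.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <sys/uio.h>

/* Hypothetical simplified types: elem_addr_t stands in for hwaddr,
 * Elem for VirtQueueElement.  All four arrays live in the single
 * trailing buffer, so one malloc covers the whole element. */
typedef unsigned long long elem_addr_t;

typedef struct Elem {
    unsigned out_num, in_num;
    elem_addr_t *out_addr;   /* points into data[] */
    elem_addr_t *in_addr;
    struct iovec *out_sg;
    struct iovec *in_sg;
    char data[];             /* C99 flexible array member */
} Elem;

/* Allocate an element exactly as big as this request needs,
 * mirroring what the patch's virtqueue_alloc_element call achieves. */
Elem *elem_alloc(unsigned out_num, unsigned in_num)
{
    unsigned total = out_num + in_num;
    Elem *e = malloc(sizeof(*e)
                     + total * (sizeof(elem_addr_t) + sizeof(struct iovec)));
    assert(e);
    e->out_num = out_num;
    e->in_num = in_num;
    e->out_addr = (elem_addr_t *)e->data;
    e->in_addr = e->out_addr + out_num;
    e->out_sg = (struct iovec *)(e->in_addr + in_num);
    e->in_sg = e->out_sg + out_num;
    return e;
}

/* Copy the stack-built state into the right-sized element, like the
 * two copy loops at the end of the new virtqueue_pop: out entries
 * come first in addr[]/iov[], in entries follow at offset out_num. */
Elem *elem_from_stack(const elem_addr_t *addr, const struct iovec *iov,
                      unsigned out_num, unsigned in_num)
{
    Elem *e = elem_alloc(out_num, in_num);
    unsigned i;
    for (i = 0; i < out_num; i++) {
        e->out_addr[i] = addr[i];
        e->out_sg[i] = iov[i];
    }
    for (i = 0; i < in_num; i++) {
        e->in_addr[i] = addr[out_num + i];
        e->in_sg[i] = iov[out_num + i];
    }
    return e;
}
```

The copy is a handful of pointer-sized stores per descriptor, while the allocation it replaces was always sized for the worst case (two VIRTQUEUE_MAX_SIZE-entry address arrays plus two iovec arrays), which is why the commit message calls the copy cost minimal compared to the large malloc.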