From patchwork Tue Mar 31 19:28:00 2020
X-Patchwork-Submitter: Eugenio Perez Martin
X-Patchwork-Id: 11468313
From: Eugenio Pérez
To: "Michael S. Tsirkin"
Cc: linux-kernel@vger.kernel.org, Stephen Rothwell, kvm list,
    Linux Next Mailing List, virtualization@lists.linux-foundation.org,
    Eugenio Pérez, Christian Borntraeger, Halil Pasic, Cornelia Huck
Subject: [PATCH v3 4/8] vhost: batching fetches
Date: Tue, 31 Mar 2020 21:28:00 +0200
Message-Id: <20200331192804.6019-5-eperezma@redhat.com>
In-Reply-To: <20200331192804.6019-1-eperezma@redhat.com>
References: <20200331192804.6019-1-eperezma@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

From: "Michael S. Tsirkin"

With this patch applied, new and old code perform identically. Lots of
extra optimizations are now possible, e.g. we can fetch multiple heads
with copy_from/to_user now, we can get rid of maintaining the log array,
and so on.

Signed-off-by: Michael S. Tsirkin
Signed-off-by: Eugenio Pérez
---
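Note on the follow-up optimization hinted at in the commit log: once
descriptors have been gathered into vq->descs[], several contiguous ring
entries could be pulled in with one user access instead of one
vhost_get_desc() call each. The sketch below is illustration only, not part
of this patch: the helper name fetch_descs_bulk and its parameters are
invented, it assumes the usual vhost.c context (uaccess, virtio_ring,
"vhost.h"), and it ignores the IOTLB path, endian conversion and ring
wrap-around.

static int fetch_descs_bulk(struct vhost_virtqueue *vq,
			    struct vring_desc *tmp,
			    unsigned int head, unsigned int n)
{
	/* One copy_from_user() covers n consecutive descriptors starting
	 * at index head, instead of n single-descriptor reads.
	 */
	if (copy_from_user(tmp, vq->desc + head, n * sizeof(*tmp)))
		return -EFAULT;

	return 0;
}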
 drivers/vhost/test.c  |  2 +-
 drivers/vhost/vhost.c | 47 ++++++++++++++++++++++++++++++++++++++-----
 drivers/vhost/vhost.h |  5 ++++-
 3 files changed, 47 insertions(+), 7 deletions(-)

diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
index 394e2e5c772d..4b00cd4266ad 100644
--- a/drivers/vhost/test.c
+++ b/drivers/vhost/test.c
@@ -119,7 +119,7 @@ static int vhost_test_open(struct inode *inode, struct file *f)
 	dev = &n->dev;
 	vqs[VHOST_TEST_VQ] = &n->vqs[VHOST_TEST_VQ];
 	n->vqs[VHOST_TEST_VQ].handle_kick = handle_vq_kick;
-	vhost_dev_init(dev, vqs, VHOST_TEST_VQ_MAX, UIO_MAXIOV,
+	vhost_dev_init(dev, vqs, VHOST_TEST_VQ_MAX, UIO_MAXIOV + 64,
 		       VHOST_TEST_PKT_WEIGHT, VHOST_TEST_WEIGHT);
 
 	f->private_data = n;
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 56c5253056ee..1646b1ce312a 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -303,6 +303,7 @@ static void vhost_vq_reset(struct vhost_dev *dev,
 {
 	vq->num = 1;
 	vq->ndescs = 0;
+	vq->first_desc = 0;
 	vq->desc = NULL;
 	vq->avail = NULL;
 	vq->used = NULL;
@@ -371,6 +372,11 @@ static int vhost_worker(void *data)
 	return 0;
 }
 
+static int vhost_vq_num_batch_descs(struct vhost_virtqueue *vq)
+{
+	return vq->max_descs - UIO_MAXIOV;
+}
+
 static void vhost_vq_free_iovecs(struct vhost_virtqueue *vq)
 {
 	kfree(vq->descs);
@@ -393,6 +399,9 @@ static long vhost_dev_alloc_iovecs(struct vhost_dev *dev)
 	for (i = 0; i < dev->nvqs; ++i) {
 		vq = dev->vqs[i];
 		vq->max_descs = dev->iov_limit;
+		if (vhost_vq_num_batch_descs(vq) < 0) {
+			return -EINVAL;
+		}
 		vq->descs = kmalloc_array(vq->max_descs,
 					  sizeof(*vq->descs),
 					  GFP_KERNEL);
@@ -1643,6 +1652,7 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
 		vq->last_avail_idx = s.num;
 		/* Forget the cached index value. */
 		vq->avail_idx = vq->last_avail_idx;
+		vq->ndescs = vq->first_desc = 0;
 		break;
 	case VHOST_GET_VRING_BASE:
 		s.index = idx;
@@ -2211,7 +2221,7 @@ static int fetch_indirect_descs(struct vhost_virtqueue *vq,
 	return 0;
 }
 
-static int fetch_descs(struct vhost_virtqueue *vq)
+static int fetch_buf(struct vhost_virtqueue *vq)
 {
 	unsigned int i, head, found = 0;
 	struct vhost_desc *last;
@@ -2224,7 +2234,11 @@ static int fetch_descs(struct vhost_virtqueue *vq)
 	/* Check it isn't doing very strange things with descriptor numbers. */
 	last_avail_idx = vq->last_avail_idx;
 
-	if (vq->avail_idx == vq->last_avail_idx) {
+	if (unlikely(vq->avail_idx == vq->last_avail_idx)) {
+		/* If we already have work to do, don't bother re-checking. */
+		if (likely(vq->ndescs))
+			return vq->num;
+
 		if (unlikely(vhost_get_avail_idx(vq, &avail_idx))) {
 			vq_err(vq, "Failed to access avail idx at %p\n",
 				&vq->avail->idx);
@@ -2315,6 +2329,24 @@ static int fetch_descs(struct vhost_virtqueue *vq)
 	return 0;
 }
 
+static int fetch_descs(struct vhost_virtqueue *vq)
+{
+	int ret = 0;
+
+	if (unlikely(vq->first_desc >= vq->ndescs)) {
+		vq->first_desc = 0;
+		vq->ndescs = 0;
+	}
+
+	if (vq->ndescs)
+		return 0;
+
+	while (!ret && vq->ndescs <= vhost_vq_num_batch_descs(vq))
+		ret = fetch_buf(vq);
+
+	return vq->ndescs ? 0 : ret;
+}
+
 /* This looks in the virtqueue and for the first available buffer, and converts
  * it to an iovec for convenient access.  Since descriptors consist of some
 * number of output then some number of input descriptors, it's actually two
@@ -2340,7 +2372,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
 	if (unlikely(log))
 		*log_num = 0;
 
-	for (i = 0; i < vq->ndescs; ++i) {
+	for (i = vq->first_desc; i < vq->ndescs; ++i) {
 		unsigned iov_count = *in_num + *out_num;
 		struct vhost_desc *desc = &vq->descs[i];
 		int access;
@@ -2386,14 +2418,19 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
 		}
 
 		ret = desc->id;
+
+		if (!(desc->flags & VRING_DESC_F_NEXT))
+			break;
 	}
 
-	vq->ndescs = 0;
+	vq->first_desc = i + 1;
 
 	return ret;
 
 err:
-	vhost_discard_vq_desc(vq, 1);
+	for (i = vq->first_desc; i < vq->ndescs; ++i)
+		if (!(vq->descs[i].flags & VRING_DESC_F_NEXT))
+			vhost_discard_vq_desc(vq, 1);
 	vq->ndescs = 0;
 
 	return ret;
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 1dbb5e44fba4..e1caca605c56 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -100,6 +100,7 @@ struct vhost_virtqueue {
 
 	struct vhost_desc *descs;
 	int ndescs;
+	int first_desc;
 	int max_descs;
 
 	const struct vhost_umem_node *meta_iotlb[VHOST_NUM_ADDRS];
@@ -242,7 +243,7 @@ ssize_t vhost_chr_write_iter(struct vhost_dev *dev,
 int vhost_init_device_iotlb(struct vhost_dev *d, bool enabled);
 
 #define vq_err(vq, fmt, ...) do {                                  \
-		pr_debug(pr_fmt(fmt), ##__VA_ARGS__);       \
+		pr_err(pr_fmt(fmt), ##__VA_ARGS__);       \
 		if ((vq)->error_ctx)                               \
 			eventfd_signal((vq)->error_ctx, 1);\
 	} while (0)
@@ -268,6 +269,8 @@ static inline void vhost_vq_set_backend(struct vhost_virtqueue *vq,
 					void *private_data)
 {
 	vq->private_data = private_data;
+	vq->ndescs = 0;
+	vq->first_desc = 0;
 }
 
 /**
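For completeness, the caller-visible contract of vhost_get_vq_desc() is meant
to stay the same with the batching in place: backends still call it once per
buffer and get back a head id, vq->num when no buffer is available, or a
negative error. A simplified sketch of that usual consumer pattern in
drivers/vhost follows (illustration only, not code from this patch;
handle_one_buf is a made-up name, and real backends do proper length
accounting and error handling):

static void handle_one_buf(struct vhost_virtqueue *vq)
{
	unsigned int out, in;
	int head;

	/* Same call and return convention as before this series. */
	head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
				 &out, &in, NULL, NULL);
	if (unlikely(head < 0))
		return;			/* error while accessing the ring */
	if (head == vq->num)
		return;			/* no buffers available right now */

	/* ... consume vq->iov[0 .. out + in) here ... */

	vhost_add_used(vq, head, 0);
	vhost_signal(vq->dev, vq);
}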