From patchwork Thu Feb  7 12:22:30 2013
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 2110511
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org
Cc: Wanlong Gao, asias@redhat.com, Rusty Russell, mst@redhat.com,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [RFC PATCH 6/8] virtio-net: unmark scatterlist ending after virtqueue_add_buf
Date: Thu,  7 Feb 2013 13:22:30 +0100
Message-Id: <1360239752-2470-7-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.1
In-Reply-To: <1360239752-2470-1-git-send-email-pbonzini@redhat.com>
References: <1360239752-2470-1-git-send-email-pbonzini@redhat.com>
List-ID: kvm@vger.kernel.org

Prepare for when virtqueue_add_buf will use sg_next instead of
ignoring ending markers.

Note that for_each_sg (and thus virtqueue_add_buf) allows you to pass
a "truncated" scatterlist that does not have a marker on the last
item.  We rely on this in add_recvbuf_mergeable.

Signed-off-by: Paolo Bonzini
---
	This is the only part that survived of Rusty's ideas. :)

 drivers/net/virtio_net.c | 21 ++++++++++++++++-----
 1 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 35c00c5..ce08b54 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -440,13 +440,17 @@ static int add_recvbuf_small(struct receive_queue *rq, gfp_t gfp)
 
 	hdr = skb_vnet_hdr(skb);
 	sg_set_buf(rq->sg, &hdr->hdr, sizeof hdr->hdr);
-
 	skb_to_sgvec(skb, rq->sg + 1, 0, skb->len);
 
 	err = virtqueue_add_buf(rq->vq, rq->sg, 0, 2, skb, gfp);
 	if (err < 0)
 		dev_kfree_skb(skb);
 
+	/*
+	 * An optimization: clear the end bit set by skb_to_sgvec, so
+	 * we can simply re-use rq->sg[] next time.
+	 */
+	sg_unmark_end(rq->sg + 1);
 	return err;
 }
 
@@ -505,8 +509,7 @@ static int add_recvbuf_mergeable(struct receive_queue *rq, gfp_t gfp)
 	if (!page)
 		return -ENOMEM;
 
-	sg_init_one(rq->sg, page_address(page), PAGE_SIZE);
-
+	sg_set_page(rq->sg, page, PAGE_SIZE, 0);
 	err = virtqueue_add_buf(rq->vq, rq->sg, 0, 1, page, gfp);
 	if (err < 0)
 		give_pages(rq, page);
@@ -671,6 +674,7 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
 	const unsigned char *dest = ((struct ethhdr *)skb->data)->h_dest;
 	struct virtnet_info *vi = sq->vq->vdev->priv;
 	unsigned num_sg;
+	int ret;
 
 	pr_debug("%s: xmit %p %pM\n", vi->dev->name, skb, dest);
 
@@ -710,8 +714,15 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
 	sg_set_buf(sq->sg, &hdr->hdr, sizeof hdr->hdr);
 
 	num_sg = skb_to_sgvec(skb, sq->sg + 1, 0, skb->len) + 1;
-	return virtqueue_add_buf(sq->vq, sq->sg, num_sg,
-				 0, skb, GFP_ATOMIC);
+	ret = virtqueue_add_buf(sq->vq, sq->sg, num_sg,
+				0, skb, GFP_ATOMIC);
+
+	/*
+	 * An optimization: clear the end bit set by skb_to_sgvec, so
+	 * we can simply re-use sq->sg[] next time.
+	 */
+	sg_unmark_end(&sq->sg[num_sg-1]);
+	return ret;
 }
 
 static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
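
For readers unfamiliar with the end-marker handling, here is a minimal
sketch (not part of the patch, and the struct/function names are
hypothetical) of the reuse pattern the added comments describe, using
only helpers from include/linux/scatterlist.h:

#include <linux/scatterlist.h>

struct example_queue {
	/* Persistent array, set up once with sg_init_table(). */
	struct scatterlist sg[2];
};

static void example_queue_init(struct example_queue *q)
{
	sg_init_table(q->sg, 2);
}

static void example_queue_post(struct example_queue *q,
			       void *hdr, unsigned int hdr_len,
			       void *buf, unsigned int buf_len)
{
	sg_set_buf(&q->sg[0], hdr, hdr_len);
	sg_set_buf(&q->sg[1], buf, buf_len);
	sg_mark_end(&q->sg[1]);	/* what skb_to_sgvec() does on the last entry */

	/* ... hand q->sg with an explicit count of 2 to the consumer ... */

	/*
	 * Clear the end bit again so the same array can be refilled on the
	 * next call without re-running sg_init_table(), exactly as the patch
	 * does with sg_unmark_end() after virtqueue_add_buf().
	 */
	sg_unmark_end(&q->sg[1]);
}

The "truncated" case mentioned in the changelog works because
for_each_sg() walks exactly the number of entries the caller passes,
advancing with sg_next(), so a last entry without the end bit set is
harmless as long as the count is correct; that is what lets
add_recvbuf_mergeable pass a single sg_set_page() entry with no
sg_mark_end().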