From patchwork Wed Jun 8 23:46:06 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Amos Kong
X-Patchwork-Id: 862882
Subject: [PATCH] tun: do not put self in waitq if doing a nonblock read
To: netdev@vger.kernel.org
From: Amos Kong
Cc: jasowang@redhat.com, davem@davemloft.net, kvm@vger.kernel.org, mst@redhat.com
Date: Thu, 09 Jun 2011 07:46:06 +0800
Message-ID: <20110608234606.8681.19932.stgit@localhost6.localdomain6>
User-Agent: StGit/0.15
X-Mailing-List: kvm@vger.kernel.org

Perf shows a relatively high rate (about 8%) of contention in
spin_lock_irqsave() when doing netperf between an external host and a
guest. It is mainly because of the lock contention between
tun_do_read() and tun_xmit_skb(), so this patch does not put the reader
into the waitqueue when doing a nonblocking read, to reduce this kind
of contention. After this patch, it drops to 4%.

Signed-off-by: Jason Wang
Signed-off-by: Amos Kong
---
 drivers/net/tun.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 74e9405..95dbff4 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -817,7 +817,8 @@ static ssize_t tun_do_read(struct tun_struct *tun,
 
 	tun_debug(KERN_INFO, tun, "tun_chr_read\n");
 
-	add_wait_queue(&tun->wq.wait, &wait);
+	if (unlikely(!noblock))
+		add_wait_queue(&tun->wq.wait, &wait);
 	while (len) {
 		current->state = TASK_INTERRUPTIBLE;
 
@@ -848,7 +849,8 @@ static ssize_t tun_do_read(struct tun_struct *tun,
 	}
 
 	current->state = TASK_RUNNING;
-	remove_wait_queue(&tun->wq.wait, &wait);
+	if (unlikely(!noblock))
+		remove_wait_queue(&tun->wq.wait, &wait);
 
 	return ret;
 }
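
Not part of the patch: a minimal userspace sketch of how an application
reaches the nonblocking read path that this change optimizes. The
interface name "tun0" and the error handling are illustrative only, and
opening the device assumes CAP_NET_ADMIN.

/* Open a tun device with O_NONBLOCK and issue a read; with no packet
 * queued the read returns EAGAIN immediately, and with this patch the
 * reader no longer registers itself on tun->wq.wait on that path. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int main(void)
{
	struct ifreq ifr;
	char buf[2048];
	ssize_t n;

	/* O_NONBLOCK makes tun_do_read() run with noblock set. */
	int fd = open("/dev/net/tun", O_RDWR | O_NONBLOCK);
	if (fd < 0) {
		perror("open /dev/net/tun");
		return 1;
	}

	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TUN | IFF_NO_PI;
	strncpy(ifr.ifr_name, "tun0", IFNAMSIZ - 1);	/* illustrative name */
	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		perror("TUNSETIFF");
		close(fd);
		return 1;
	}

	n = read(fd, buf, sizeof(buf));
	if (n < 0 && errno == EAGAIN)
		printf("no packet ready\n");	/* nonblocking path taken */
	else if (n >= 0)
		printf("read %zd bytes\n", n);

	close(fd);
	return 0;
}

The point of the patch, as the commit message describes it, is that this
EAGAIN path no longer touches the wait queue at all, so a polling reader
stops contending on the wait-queue spinlock with the transmit side's
wakeups.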