From patchwork Thu Jul 3 20:53:35 2014
X-Patchwork-Submitter: Suman Anna
X-Patchwork-Id: 4477001
From: Suman Anna
To: Ohad Ben-Cohen
CC: Rusty Russell, Suman Anna
Subject: [PATCH] rpmsg: compute number of buffers to allocate from vrings
Date: Thu, 3 Jul 2014 15:53:35 -0500
Message-ID: <1404420815-42108-1-git-send-email-s-anna@ti.com>
X-Mailer: git-send-email 2.0.0
X-Mailing-List: linux-omap@vger.kernel.org

The buffers to be used for communication are allocated during the
rpmsg virtio driver's probe, and the number of buffers is currently
hard-coded to 512. Remove this hard-coded value, as the number of
buffers can vary from one platform to another or between different
remote processors. Instead, rely on the number of buffers the
virtqueue vring was set up with in the first place.

This fixes the WARN_ON during the setup of the receive buffers for
vrings set up with fewer than 512 buffers.

NOTE: The number of buffers is already assumed to be symmetrical in
each direction, and that logic is unchanged.
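For illustration only, the new sizing reduces to the following sketch
(the helper is hypothetical; the patch below computes this inline in
rpmsg_probe()):

	/* one buffer per rx vring entry, doubled to cover the tx half */
	static unsigned int rpmsg_num_bufs(struct virtqueue *rvq)
	{
		return virtqueue_get_vring_size(rvq) * 2;
	}

A remote processor set up with 256-entry vrings still gets the old
total of 512 buffers (256 rx + 256 tx), while e.g. a 32-entry vring
now yields 64 buffers instead of overflowing the rx vring during
buffer setup.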
Signed-off-by: Suman Anna
---
 drivers/rpmsg/virtio_rpmsg_bus.c | 41 +++++++++++++++++++++++++---------------
 1 file changed, 26 insertions(+), 15 deletions(-)

diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
index b6135d4..e9866a6 100644
--- a/drivers/rpmsg/virtio_rpmsg_bus.c
+++ b/drivers/rpmsg/virtio_rpmsg_bus.c
@@ -41,6 +41,7 @@
  * @svq:	tx virtqueue
  * @rbufs:	kernel address of rx buffers
  * @sbufs:	kernel address of tx buffers
+ * @num_bufs:	total number of buffers for rx and tx
  * @last_sbuf:	index of last tx buffer used
  * @bufs_dma:	dma base addr of the buffers
  * @tx_lock:	protects svq, sbufs and sleepers, to allow concurrent senders.
@@ -60,6 +61,7 @@ struct virtproc_info {
 	struct virtio_device *vdev;
 	struct virtqueue *rvq, *svq;
 	void *rbufs, *sbufs;
+	unsigned int num_bufs;
 	int last_sbuf;
 	dma_addr_t bufs_dma;
 	struct mutex tx_lock;
@@ -86,14 +88,14 @@ struct rpmsg_channel_info {
 #define to_rpmsg_driver(d) container_of(d, struct rpmsg_driver, drv)
 
 /*
- * We're allocating 512 buffers of 512 bytes for communications, and then
- * using the first 256 buffers for RX, and the last 256 buffers for TX.
+ * We're allocating buffers of 512 bytes each for communications. The
+ * number of buffers is computed from the number of buffers supported
+ * by the virtqueue vring; the first half of those buffers is then
+ * used for RX, and the last half for TX.
  *
  * Each buffer will have 16 bytes for the msg header and 496 bytes for
  * the payload.
  *
- * This will require a total space of 256KB for the buffers.
- *
  * We might also want to add support for user-provided buffers in time.
  * This will allow bigger buffer size flexibility, and can also be used
  * to achieve zero-copy messaging.
@@ -102,9 +104,7 @@ struct rpmsg_channel_info {
  * can change this without changing anything in the firmware of the remote
  * processor.
  */
-#define RPMSG_NUM_BUFS		(512)
 #define RPMSG_BUF_SIZE		(512)
-#define RPMSG_TOTAL_BUF_SPACE	(RPMSG_NUM_BUFS * RPMSG_BUF_SIZE)
 
 /*
  * Local addresses are dynamically allocated on-demand.
@@ -579,7 +579,7 @@ static void *get_a_tx_buf(struct virtproc_info *vrp)
 	 * either pick the next unused tx buffer
 	 * (half of our buffers are used for sending messages)
 	 */
-	if (vrp->last_sbuf < RPMSG_NUM_BUFS / 2)
+	if (vrp->last_sbuf < vrp->num_bufs / 2)
 		ret = vrp->sbufs + RPMSG_BUF_SIZE * vrp->last_sbuf++;
 	/* or recycle a used one */
 	else
@@ -948,6 +948,7 @@ static int rpmsg_probe(struct virtio_device *vdev)
 	struct virtproc_info *vrp;
 	void *bufs_va;
 	int err = 0, i;
+	size_t total_buf_space;
 
 	vrp = kzalloc(sizeof(*vrp), GFP_KERNEL);
 	if (!vrp)
@@ -968,10 +969,19 @@ static int rpmsg_probe(struct virtio_device *vdev)
 	vrp->rvq = vqs[0];
 	vrp->svq = vqs[1];
 
+	/*
+	 * We expect an equal number of buffers for each direction as per
+	 * the current code, so throw a warning if the configuration
+	 * doesn't match. This can easily be adjusted if needed.
+	 */
+	vrp->num_bufs = virtqueue_get_vring_size(vrp->rvq) * 2;
+	WARN_ON(virtqueue_get_vring_size(vrp->svq) != (vrp->num_bufs / 2));
+	total_buf_space = vrp->num_bufs * RPMSG_BUF_SIZE;
+
 	/* allocate coherent memory for the buffers */
 	bufs_va = dma_alloc_coherent(vdev->dev.parent->parent,
-				     RPMSG_TOTAL_BUF_SPACE,
-				     &vrp->bufs_dma, GFP_KERNEL);
+				     total_buf_space, &vrp->bufs_dma,
+				     GFP_KERNEL);
 	if (!bufs_va) {
 		err = -ENOMEM;
 		goto vqs_del;
@@ -984,10 +994,10 @@ static int rpmsg_probe(struct virtio_device *vdev)
 	vrp->rbufs = bufs_va;
 
 	/* and half is dedicated for TX */
-	vrp->sbufs = bufs_va + RPMSG_TOTAL_BUF_SPACE / 2;
+	vrp->sbufs = bufs_va + total_buf_space / 2;
 
 	/* set up the receive buffers */
-	for (i = 0; i < RPMSG_NUM_BUFS / 2; i++) {
+	for (i = 0; i < vrp->num_bufs / 2; i++) {
 		struct scatterlist sg;
 		void *cpu_addr = vrp->rbufs + i * RPMSG_BUF_SIZE;
 
@@ -1023,8 +1033,8 @@ static int rpmsg_probe(struct virtio_device *vdev)
 	return 0;
 
 free_coherent:
-	dma_free_coherent(vdev->dev.parent->parent, RPMSG_TOTAL_BUF_SPACE,
-			  bufs_va, vrp->bufs_dma);
+	dma_free_coherent(vdev->dev.parent->parent, total_buf_space,
+			  bufs_va, vrp->bufs_dma);
 vqs_del:
 	vdev->config->del_vqs(vrp->vdev);
 free_vrp:
@@ -1042,6 +1052,7 @@ static int rpmsg_remove_device(struct device *dev, void *data)
 static void rpmsg_remove(struct virtio_device *vdev)
 {
 	struct virtproc_info *vrp = vdev->priv;
+	size_t total_buf_space = vrp->num_bufs * RPMSG_BUF_SIZE;
 	int ret;
 
 	vdev->config->reset(vdev);
@@ -1057,8 +1068,8 @@ static void rpmsg_remove(struct virtio_device *vdev)
 
 	vdev->config->del_vqs(vrp->vdev);
 
-	dma_free_coherent(vdev->dev.parent->parent, RPMSG_TOTAL_BUF_SPACE,
-			  vrp->rbufs, vrp->bufs_dma);
+	dma_free_coherent(vdev->dev.parent->parent, total_buf_space,
+			  vrp->rbufs, vrp->bufs_dma);
 
 	kfree(vrp);
 }
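For the remoteproc case, the vring size that virtqueue_get_vring_size()
reports ultimately comes from the firmware's resource table. As a
hypothetical illustration (field layout as in struct fw_rsc_vdev_vring
from <linux/remoteproc.h> of this era; the values are made up), a vdev
entry with small rings might declare:

	#include <linux/remoteproc.h>

	/* illustrative vrings: 32 buffers each way instead of 256 */
	static const struct fw_rsc_vdev_vring example_vrings[2] = {
		/* da,             align, num, notifyid, reserved */
		{ FW_RSC_ADDR_ANY, 4096,  32,  0,        0 },	/* rx */
		{ FW_RSC_ADDR_ANY, 4096,  32,  1,        0 },	/* tx */
	};

With num = 32, rpmsg_probe() would now allocate 64 buffers (32 KB of
coherent memory) for this processor instead of assuming 512 buffers
(256 KB) and hitting the WARN_ON while filling the rx vring.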