From patchwork Tue Aug 17 17:27:23 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: jvrao
X-Patchwork-Id: 119983
From: "Venkateswararao Jujjuri (JV)"
To: v9fs-developer@lists.sourceforge.net
Date:
Tue, 17 Aug 2010 10:27:23 -0700
Message-Id: <1282066045-3945-4-git-send-email-jvrao@linux.vnet.ibm.com>
In-Reply-To: <1282066045-3945-1-git-send-email-jvrao@linux.vnet.ibm.com>
References: <1282066045-3945-1-git-send-email-jvrao@linux.vnet.ibm.com>
Cc: linux-fsdevel@vger.kernel.org, Badari Pulavarty
Subject: [V9fs-developer] [PATCH 3/5] [net/9p] Add support for placing page addresses directly on the sg list.

diff --git a/include/net/9p/9p.h b/include/net/9p/9p.h
index a8de812..382ef22 100644
--- a/include/net/9p/9p.h
+++ b/include/net/9p/9p.h
@@ -651,7 +651,11 @@ struct p9_fcall {
 	size_t offset;
 	size_t capacity;
-
+	struct page **pdata;
+	uint32_t pdata_mapped_pages;
+	uint32_t pdata_off;
+	uint32_t pdata_write_len;
+	uint32_t pdata_read_len;
 	uint8_t *sdata;
 };

diff --git a/net/9p/client.c b/net/9p/client.c
index 29bbbbd..5487896 100644
--- a/net/9p/client.c
+++ b/net/9p/client.c
@@ -244,8 +244,12 @@ static struct p9_req_t *p9_tag_alloc(struct p9_client *c, u16 tag)
 	}
 	req->tc->sdata = (char *) req->tc + sizeof(struct p9_fcall);
 	req->tc->capacity = c->msize;
+	req->tc->pdata_write_len = 0;
+	req->tc->pdata_read_len = 0;
 	req->rc->sdata = (char *) req->rc + sizeof(struct p9_fcall);
 	req->rc->capacity = c->msize;
+	req->rc->pdata_write_len = 0;
+	req->rc->pdata_read_len = 0;
 	}
 
 	p9pdu_reset(req->tc);

diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
index 762c19f..8f86cb5 100644
--- a/net/9p/trans_virtio.c
+++ b/net/9p/trans_virtio.c
@@ -180,6 +180,44 @@ pack_sg_list(struct scatterlist *sg, int start, int limit, char *data,
 	return index-start;
 }
 
+/**
+ * pack_sg_list_p - Pack a scatter/gather list from an array of pages.
+ * @sg: scatter/gather list to pack into
+ * @start: which segment of the sg_list to start at
+ * @limit: maximum segment to pack data to
+ * @pdu: pdu prepared to put on the wire.
+ * @count: amount of data to pack into the scatter/gather list
+ *
+ * This is just like pack_sg_list() except that it takes a page array
+ * as input and places the pages directly on the sg list after
+ * accounting for the first page's offset.
+ */
+
+static int
+pack_sg_list_p(struct scatterlist *sg, int start, int limit,
+		struct p9_fcall *pdu, int count)
+{
+	int s;
+	int i = 0;
+	int index = start;
+
+	if (pdu->pdata_off) {
+		s = min((int)(PAGE_SIZE - pdu->pdata_off), count);
+		sg_set_page(&sg[index++], pdu->pdata[i++], s, pdu->pdata_off);
+		count -= s;
+	}
+
+	while (count) {
+		BUG_ON(index > limit);
+		s = min((int)PAGE_SIZE, count);
+		sg_set_page(&sg[index++], pdu->pdata[i++], s, 0);
+		count -= s;
+	}
+
+	return index-start;
+}
+
+
 /* We don't currently allow canceling of virtio requests */
 static int p9_virtio_cancel(struct p9_client *client, struct p9_req_t *req)
 {
@@ -196,16 +234,31 @@ static int p9_virtio_cancel(struct p9_client *client, struct p9_req_t *req)
 static int
 p9_virtio_request(struct p9_client *client, struct p9_req_t *req)
 {
-	int in, out;
+	int in, out, outp, inp;
 	struct virtio_chan *chan = client->trans;
 	char *rdata = (char *)req->rc+sizeof(struct p9_fcall);
 
 	P9_DPRINTK(P9_DEBUG_TRANS, "9p debug: virtio request\n");
 
 	out = pack_sg_list(chan->sg, 0, VIRTQUEUE_NUM, req->tc->sdata,
-			req->tc->size);
-	in = pack_sg_list(chan->sg, out, VIRTQUEUE_NUM-out, rdata,
+			req->tc->size);
+
+	BUG_ON(req->tc->pdata_write_len &&
+			req->tc->pdata_read_len);
+
+	if (req->tc->pdata_write_len) {
+		outp = pack_sg_list_p(chan->sg, out, VIRTQUEUE_NUM,
+				req->tc, req->tc->pdata_write_len);
+		out += outp;
+	}
+	if (req->tc->pdata_read_len) {
+		inp = pack_sg_list(chan->sg, out, VIRTQUEUE_NUM, rdata, 11);
+		in = pack_sg_list_p(chan->sg, out+inp, VIRTQUEUE_NUM,
+				req->tc, req->tc->pdata_read_len);
+		in += inp;
+	} else {
+		in = pack_sg_list(chan->sg, out, VIRTQUEUE_NUM, rdata,
 			client->msize);
+	}
 
 	req->status = REQ_STATUS_SENT;