Message ID: 1489449360-14411-6-git-send-email-sstabellini@kernel.org (mailing list archive)
State: New, archived
On Mon, 13 Mar 2017 16:55:57 -0700
Stefano Stabellini <sstabellini@kernel.org> wrote:

> Upon receiving an event channel notification from the frontend, schedule
> the bottom half. From the bottom half, read one request from the ring,
> create a pdu and call pdu_submit to handle it.
>
> For now, only handle one request per ring at a time.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: anthony.perard@citrix.com
> CC: jgross@suse.com
> CC: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> CC: Greg Kurz <groug@kaod.org>
> ---
>  hw/9pfs/xen-9p-backend.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 47 insertions(+)
>
> diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
> index 0e4a133..741dd31 100644
> --- a/hw/9pfs/xen-9p-backend.c
> +++ b/hw/9pfs/xen-9p-backend.c
> @@ -94,12 +94,59 @@ static int xen_9pfs_init(struct XenDevice *xendev)
>      return 0;
>  }
>
> +static int xen_9pfs_receive(struct Xen9pfsRing *ring)

Coding style for structured types.

> +{
> +    struct xen_9pfs_header h;
> +    RING_IDX cons, prod, masked_prod, masked_cons;
> +    V9fsPDU *pdu;
> +
> +    if (ring->inprogress) {
> +        return 0;
> +    }
> +
> +    cons = ring->intf->out_cons;
> +    prod = ring->intf->out_prod;
> +    xen_rmb();
> +
> +    if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < sizeof(h)) {
> +        return 0;
> +    }
> +    ring->inprogress = true;
> +
> +    masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
> +    masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> +
> +    xen_9pfs_read_packet(ring->ring.out, masked_prod, &masked_cons,
> +                         XEN_9PFS_RING_SIZE, (uint8_t*) &h, sizeof(h));
> +
> +    pdu = pdu_alloc(&ring->priv->state);

pdu_alloc() can theoretically return NULL. Maybe add a comment to
indicate this won't happen because "for now [we] only handle one
request per ring at a time".

> +    pdu->size = h.size;
> +    pdu->id = h.id;
> +    pdu->tag = h.tag;
> +    ring->out_size = h.size;
> +    ring->out_cons = cons + h.size;
> +
> +    qemu_co_queue_init(&pdu->complete);
> +    pdu_submit(pdu);
> +
> +    return 0;
> +}
> +
>  static void xen_9pfs_bh(void *opaque)
>  {
> +    struct Xen9pfsRing *ring = opaque;

Coding style for structured types.

> +    xen_9pfs_receive(ring);
>  }
>
>  static void xen_9pfs_evtchn_event(void *opaque)
>  {
> +    struct Xen9pfsRing *ring = opaque;

Coding style for structured types.

> +    evtchn_port_t port;
> +
> +    port = xenevtchn_pending(ring->evtchndev);
> +    xenevtchn_unmask(ring->evtchndev, port);
> +
> +    qemu_bh_schedule(ring->bh);
>  }
>
>  static int xen_9pfs_free(struct XenDevice *xendev)
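For readers outside the thread: "coding style for structured types" refers to QEMU's CODING_STYLE, which wants CamelCase typedef names rather than bare struct tags. A minimal sketch of what the requested fix presumably looks like; the typedef line is an assumption for illustration, not part of this patch:

    /* Assumed: the ring struct gains (or already has) a QEMU-style typedef. */
    typedef struct Xen9pfsRing Xen9pfsRing;

    static int xen_9pfs_receive(Xen9pfsRing *ring)   /* not: struct Xen9pfsRing * */

The same change would apply at the two "void *opaque" cast sites below, e.g. "Xen9pfsRing *ring = opaque;".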
On Mon, 13 Mar 2017 16:55:57 -0700
Stefano Stabellini <sstabellini@kernel.org> wrote:

> Upon receiving an event channel notification from the frontend, schedule
> the bottom half. From the bottom half, read one request from the ring,
> create a pdu and call pdu_submit to handle it.
>
> For now, only handle one request per ring at a time.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: anthony.perard@citrix.com
> CC: jgross@suse.com
> CC: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> CC: Greg Kurz <groug@kaod.org>
> ---

Oops, one more remark I forgot in my previous mail. See below.

>  hw/9pfs/xen-9p-backend.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 47 insertions(+)
>
> diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
> index 0e4a133..741dd31 100644
> --- a/hw/9pfs/xen-9p-backend.c
> +++ b/hw/9pfs/xen-9p-backend.c
> @@ -94,12 +94,59 @@ static int xen_9pfs_init(struct XenDevice *xendev)
>      return 0;
>  }
>
> +static int xen_9pfs_receive(struct Xen9pfsRing *ring)
> +{
> +    struct xen_9pfs_header h;
> +    RING_IDX cons, prod, masked_prod, masked_cons;
> +    V9fsPDU *pdu;
> +
> +    if (ring->inprogress) {
> +        return 0;
> +    }
> +
> +    cons = ring->intf->out_cons;
> +    prod = ring->intf->out_prod;
> +    xen_rmb();
> +
> +    if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < sizeof(h)) {
> +        return 0;
> +    }
> +    ring->inprogress = true;
> +
> +    masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
> +    masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> +
> +    xen_9pfs_read_packet(ring->ring.out, masked_prod, &masked_cons,
> +                         XEN_9PFS_RING_SIZE, (uint8_t*) &h, sizeof(h));
> +
> +    pdu = pdu_alloc(&ring->priv->state);
> +    pdu->size = h.size;

9P uses little-endian, so this should be:

    pdu->size = le32_to_cpu(h.size);

> +    pdu->id = h.id;
> +    pdu->tag = h.tag;

and:

    pdu->tag = le16_to_cpu(h.tag);

> +    ring->out_size = h.size;
> +    ring->out_cons = cons + h.size;

Same here.

> +
> +    qemu_co_queue_init(&pdu->complete);
> +    pdu_submit(pdu);
> +
> +    return 0;
> +}
> +
>  static void xen_9pfs_bh(void *opaque)
>  {
> +    struct Xen9pfsRing *ring = opaque;
> +    xen_9pfs_receive(ring);
>  }
>
>  static void xen_9pfs_evtchn_event(void *opaque)
>  {
> +    struct Xen9pfsRing *ring = opaque;
> +    evtchn_port_t port;
> +
> +    port = xenevtchn_pending(ring->evtchndev);
> +    xenevtchn_unmask(ring->evtchndev, port);
> +
> +    qemu_bh_schedule(ring->bh);
>  }
>
>  static int xen_9pfs_free(struct XenDevice *xendev)
On Wed, 15 Mar 2017, Greg Kurz wrote:
> On Mon, 13 Mar 2017 16:55:57 -0700
> Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> > Upon receiving an event channel notification from the frontend, schedule
> > the bottom half. From the bottom half, read one request from the ring,
> > create a pdu and call pdu_submit to handle it.
> >
> > For now, only handle one request per ring at a time.
> >
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: anthony.perard@citrix.com
> > CC: jgross@suse.com
> > CC: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> > CC: Greg Kurz <groug@kaod.org>
> > ---
> >  hw/9pfs/xen-9p-backend.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 47 insertions(+)
> >
> > diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
> > index 0e4a133..741dd31 100644
> > --- a/hw/9pfs/xen-9p-backend.c
> > +++ b/hw/9pfs/xen-9p-backend.c
> > @@ -94,12 +94,59 @@ static int xen_9pfs_init(struct XenDevice *xendev)
> >      return 0;
> >  }
> >
> > +static int xen_9pfs_receive(struct Xen9pfsRing *ring)
>
> Coding style for structured types.

I'll fix

> > +{
> > +    struct xen_9pfs_header h;
> > +    RING_IDX cons, prod, masked_prod, masked_cons;
> > +    V9fsPDU *pdu;
> > +
> > +    if (ring->inprogress) {
> > +        return 0;
> > +    }
> > +
> > +    cons = ring->intf->out_cons;
> > +    prod = ring->intf->out_prod;
> > +    xen_rmb();
> > +
> > +    if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < sizeof(h)) {
> > +        return 0;
> > +    }
> > +    ring->inprogress = true;
> > +
> > +    masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
> > +    masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> > +
> > +    xen_9pfs_read_packet(ring->ring.out, masked_prod, &masked_cons,
> > +                         XEN_9PFS_RING_SIZE, (uint8_t*) &h, sizeof(h));
> > +
> > +    pdu = pdu_alloc(&ring->priv->state);
>
> pdu_alloc() can theoretically return NULL. Maybe add a comment to
> indicate this won't happen because "for now [we] only handle one
> request per ring at a time".

OK

> > +    pdu->size = h.size;
> > +    pdu->id = h.id;
> > +    pdu->tag = h.tag;
> > +    ring->out_size = h.size;
> > +    ring->out_cons = cons + h.size;
> > +
> > +    qemu_co_queue_init(&pdu->complete);
> > +    pdu_submit(pdu);
> > +
> > +    return 0;
> > +}
> > +
> >  static void xen_9pfs_bh(void *opaque)
> >  {
> > +    struct Xen9pfsRing *ring = opaque;
>
> Coding style for structured types.

OK

> > +    xen_9pfs_receive(ring);
> >  }
> >
> >  static void xen_9pfs_evtchn_event(void *opaque)
> >  {
> > +    struct Xen9pfsRing *ring = opaque;
>
> Coding style for structured types.

OK

> > +    evtchn_port_t port;
> > +
> > +    port = xenevtchn_pending(ring->evtchndev);
> > +    xenevtchn_unmask(ring->evtchndev, port);
> > +
> > +    qemu_bh_schedule(ring->bh);
> >  }
> >
> >  static int xen_9pfs_free(struct XenDevice *xendev)
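One way the comment Greg asked for could read, sketched against the quoted hunk; the wording below is illustrative, not what eventually landed:

    pdu = pdu_alloc(&ring->priv->state);
    /*
     * pdu_alloc() can theoretically return NULL, but ring->inprogress
     * guarantees that only one request per ring is handled at a time,
     * so a free PDU should always be available at this point.
     */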
On Wed, 15 Mar 2017, Greg Kurz wrote:
> On Mon, 13 Mar 2017 16:55:57 -0700
> Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> > Upon receiving an event channel notification from the frontend, schedule
> > the bottom half. From the bottom half, read one request from the ring,
> > create a pdu and call pdu_submit to handle it.
> >
> > For now, only handle one request per ring at a time.
> >
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: anthony.perard@citrix.com
> > CC: jgross@suse.com
> > CC: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> > CC: Greg Kurz <groug@kaod.org>
> > ---
>
> Oops, one more remark I forgot in my previous mail. See below.
>
> >  hw/9pfs/xen-9p-backend.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 47 insertions(+)
> >
> > diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
> > index 0e4a133..741dd31 100644
> > --- a/hw/9pfs/xen-9p-backend.c
> > +++ b/hw/9pfs/xen-9p-backend.c
> > @@ -94,12 +94,59 @@ static int xen_9pfs_init(struct XenDevice *xendev)
> >      return 0;
> >  }
> >
> > +static int xen_9pfs_receive(struct Xen9pfsRing *ring)
> > +{
> > +    struct xen_9pfs_header h;
> > +    RING_IDX cons, prod, masked_prod, masked_cons;
> > +    V9fsPDU *pdu;
> > +
> > +    if (ring->inprogress) {
> > +        return 0;
> > +    }
> > +
> > +    cons = ring->intf->out_cons;
> > +    prod = ring->intf->out_prod;
> > +    xen_rmb();
> > +
> > +    if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < sizeof(h)) {
> > +        return 0;
> > +    }
> > +    ring->inprogress = true;
> > +
> > +    masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
> > +    masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> > +
> > +    xen_9pfs_read_packet(ring->ring.out, masked_prod, &masked_cons,
> > +                         XEN_9PFS_RING_SIZE, (uint8_t*) &h, sizeof(h));
> > +
> > +    pdu = pdu_alloc(&ring->priv->state);
> > +    pdu->size = h.size;
>
> 9P uses little-endian, so this should be:
>
>     pdu->size = le32_to_cpu(h.size);
>
> > +    pdu->id = h.id;
> > +    pdu->tag = h.tag;
>
> and:
>
>     pdu->tag = le16_to_cpu(h.tag);
>
> > +    ring->out_size = h.size;
> > +    ring->out_cons = cons + h.size;
>
> Same here.

OK, thanks. Xen doesn't support big endian today, but the conversions
don't hurt.

> > +
> > +    qemu_co_queue_init(&pdu->complete);
> > +    pdu_submit(pdu);
> > +
> > +    return 0;
> > +}
> > +
> >  static void xen_9pfs_bh(void *opaque)
> >  {
> > +    struct Xen9pfsRing *ring = opaque;
> > +    xen_9pfs_receive(ring);
> >  }
> >
> >  static void xen_9pfs_evtchn_event(void *opaque)
> >  {
> > +    struct Xen9pfsRing *ring = opaque;
> > +    evtchn_port_t port;
> > +
> > +    port = xenevtchn_pending(ring->evtchndev);
> > +    xenevtchn_unmask(ring->evtchndev, port);
> > +
> > +    qemu_bh_schedule(ring->bh);
> >  }
> >
> >  static int xen_9pfs_free(struct XenDevice *xendev)
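Taken together, Greg's three remarks mean the header fields get byte-swapped on the way in. A sketch of how the affected lines would end up with those conversions applied (h.id is a single byte, so it needs no swap):

    pdu->size = le32_to_cpu(h.size);
    pdu->id = h.id;                      /* uint8_t: no byte swap needed */
    pdu->tag = le16_to_cpu(h.tag);
    ring->out_size = le32_to_cpu(h.size);
    ring->out_cons = cons + le32_to_cpu(h.size);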
diff --git a/hw/9pfs/xen-9p-backend.c b/hw/9pfs/xen-9p-backend.c
index 0e4a133..741dd31 100644
--- a/hw/9pfs/xen-9p-backend.c
+++ b/hw/9pfs/xen-9p-backend.c
@@ -94,12 +94,59 @@ static int xen_9pfs_init(struct XenDevice *xendev)
     return 0;
 }
 
+static int xen_9pfs_receive(struct Xen9pfsRing *ring)
+{
+    struct xen_9pfs_header h;
+    RING_IDX cons, prod, masked_prod, masked_cons;
+    V9fsPDU *pdu;
+
+    if (ring->inprogress) {
+        return 0;
+    }
+
+    cons = ring->intf->out_cons;
+    prod = ring->intf->out_prod;
+    xen_rmb();
+
+    if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < sizeof(h)) {
+        return 0;
+    }
+    ring->inprogress = true;
+
+    masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
+    masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+
+    xen_9pfs_read_packet(ring->ring.out, masked_prod, &masked_cons,
+                         XEN_9PFS_RING_SIZE, (uint8_t*) &h, sizeof(h));
+
+    pdu = pdu_alloc(&ring->priv->state);
+    pdu->size = h.size;
+    pdu->id = h.id;
+    pdu->tag = h.tag;
+    ring->out_size = h.size;
+    ring->out_cons = cons + h.size;
+
+    qemu_co_queue_init(&pdu->complete);
+    pdu_submit(pdu);
+
+    return 0;
+}
+
 static void xen_9pfs_bh(void *opaque)
 {
+    struct Xen9pfsRing *ring = opaque;
+    xen_9pfs_receive(ring);
 }
 
 static void xen_9pfs_evtchn_event(void *opaque)
 {
+    struct Xen9pfsRing *ring = opaque;
+    evtchn_port_t port;
+
+    port = xenevtchn_pending(ring->evtchndev);
+    xenevtchn_unmask(ring->evtchndev, port);
+
+    qemu_bh_schedule(ring->bh);
 }
 
 static int xen_9pfs_free(struct XenDevice *xendev)
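The xen_9pfs_queued() and xen_9pfs_mask() helpers used by this hunk are defined elsewhere in the series, not in this file. Under the usual Xen convention of free-running indices over a power-of-two ring, they plausibly reduce to something like the sketch below; these definitions are an assumption for illustration, not the actual ones from the Xen headers:

    /* Assumed semantics: cons/prod are free-running 32-bit indices and
     * XEN_9PFS_RING_SIZE is a power of two, so unsigned wraparound makes
     * the subtraction and the mask well defined. */
    #define xen_9pfs_mask(idx, ring_size)           ((idx) & ((ring_size) - 1))
    #define xen_9pfs_queued(prod, cons, ring_size)  ((prod) - (cons))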
Upon receiving an event channel notification from the frontend, schedule
the bottom half. From the bottom half, read one request from the ring,
create a pdu and call pdu_submit to handle it.

For now, only handle one request per ring at a time.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: anthony.perard@citrix.com
CC: jgross@suse.com
CC: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
CC: Greg Kurz <groug@kaod.org>
---
 hw/9pfs/xen-9p-backend.c | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)
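The patch only adds the handlers; the plumbing that completes the flow described in this commit message (event channel fd -> xen_9pfs_evtchn_event -> bottom half -> xen_9pfs_receive) has to be wired up at connect time, presumably along the lines of the sketch below. The field names (bh, evtchndev, evtchn) are assumptions, and this is not code from this patch:

    /* Per-ring setup at connect time (sketch; field names assumed): */
    ring->evtchndev = xenevtchn_open(NULL, 0);
    xenevtchn_bind_interdomain(ring->evtchndev, xendev->dom, ring->evtchn);
    ring->bh = qemu_bh_new(xen_9pfs_bh, ring);
    qemu_set_fd_handler(xenevtchn_fd(ring->evtchndev),
                        xen_9pfs_evtchn_event, NULL, ring);

With this in place, a frontend notification makes the event channel fd readable, xen_9pfs_evtchn_event unmasks the port and schedules the bottom half, and the bottom half pulls one request off the ring and submits it.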