Message ID | 1314646569-16738-1-git-send-email-ohad@wizery.com (mailing list archive) |
---|---|
State | New, archived |
On Mon, Aug 29, 2011 at 10:36 PM, Ohad Ben-Cohen <ohad@wizery.com> wrote:
> From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
>
> omap_iovmm requires page-aligned buffers, and that sometimes causes
> omap3isp failures (i.e. whenever the buffer passed from userspace is not
> page-aligned).
>
> Remove this limitation by rounding the address of the first page entry
> down, and adding the offset back to the device address.

I'm having second thoughts about this.

Obviously it works for omap3isp and its users because the buffer gets
mapped and everyone is happy.

But I'm not sure this is a valid IOMMU interface that the kernel
should have, because effectively we're now mapping physical memory
which nobody asked us to, and which might contain sensitive stuff we
don't want to give the device (e.g. a remote processor which might be
running rogue code) access to.

Thoughts?

> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> Acked-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
> [ohad@wizery.com: slightly edited the commit log]
> Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
> ---
> A fix by Laurent that was waiting until omap's iommu lands in drivers/.
>
> Generally we're not extending omap-iovmm anymore, but this fixes a real
> breakage for omap3isp users, so there's no point in delaying it until
> the generic DMA API is ready and we've migrated.
>
> Thanks Laurent!
>
> drivers/iommu/omap-iovmm.c |   32 ++++++++++++++++++++++++--------
> 1 files changed, 24 insertions(+), 8 deletions(-)
>
> [diff snipped; the full patch appears at the bottom of this page]
>
> --
> 1.7.4.1
>
--
To unsubscribe from this list: send the line "unsubscribe linux-omap" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Hi Ohad,

On Wednesday 31 August 2011 12:52:08 Ohad Ben-Cohen wrote:
> On Mon, Aug 29, 2011 at 10:36 PM, Ohad Ben-Cohen <ohad@wizery.com> wrote:
> > From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> >
> > omap_iovmm requires page-aligned buffers, and that sometimes causes
> > omap3isp failures (i.e. whenever the buffer passed from userspace is not
> > page-aligned).
> >
> > Remove this limitation by rounding the address of the first page entry
> > down, and adding the offset back to the device address.
>
> I'm having second thoughts about this.
>
> Obviously it works for omap3isp and its users because the buffer gets
> mapped and everyone is happy.
>
> But I'm not sure this is a valid IOMMU interface that the kernel
> should have, because effectively we're now mapping physical memory
> which nobody asked us to, and which might contain sensitive stuff we
> don't want to give the device (e.g. a remote processor which might be
> running rogue code) access to.
>
> Thoughts?

That can indeed be an issue, depending on whether we consider the device
as trusted or not. This decision obviously can't be taken by the IOMMU
layer. Do you think it would be better to move this code to the OMAP3
ISP driver?
On Wed, Aug 31, 2011 at 06:52:08AM -0400, Ohad Ben-Cohen wrote:
> On Mon, Aug 29, 2011 at 10:36 PM, Ohad Ben-Cohen <ohad@wizery.com> wrote:
> > From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> >
> > omap_iovmm requires page-aligned buffers, and that sometimes causes
> > omap3isp failures (i.e. whenever the buffer passed from userspace is not
> > page-aligned).
> >
> > Remove this limitation by rounding the address of the first page entry
> > down, and adding the offset back to the device address.
>
> I'm having second thoughts about this.
>
> Obviously it works for omap3isp and its users because the buffer gets
> mapped and everyone is happy.
>
> But I'm not sure this is a valid IOMMU interface that the kernel
> should have, because effectively we're now mapping physical memory
> which nobody asked us to, and which might contain sensitive stuff we
> don't want to give the device (e.g. a remote processor which might be
> running rogue code) access to.
>
> Thoughts?

Do you mean the parts of the pages you map to the device that are not in
the requested range (basically everything before offset and all after
size)?

This issue exists in other iommu drivers as well. It is inherent to how
the dma-api is defined and how the iommu hardware works. The dma-api can
work on byte granularity while the hardware usually only works on page
granularity.

	Joerg
Hi Joerg,

On Wednesday 31 August 2011 15:06:42 Roedel, Joerg wrote:
> On Wed, Aug 31, 2011 at 06:52:08AM -0400, Ohad Ben-Cohen wrote:
> > On Mon, Aug 29, 2011 at 10:36 PM, Ohad Ben-Cohen <ohad@wizery.com> wrote:
> > > From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > >
> > > omap_iovmm requires page-aligned buffers, and that sometimes causes
> > > omap3isp failures (i.e. whenever the buffer passed from userspace is
> > > not page-aligned).
> > >
> > > Remove this limitation by rounding the address of the first page entry
> > > down, and adding the offset back to the device address.
> >
> > I'm having second thoughts about this.
> >
> > Obviously it works for omap3isp and its users because the buffer gets
> > mapped and everyone is happy.
> >
> > But I'm not sure this is a valid IOMMU interface that the kernel
> > should have, because effectively we're now mapping physical memory
> > which nobody asked us to, and which might contain sensitive stuff we
> > don't want to give the device (e.g. a remote processor which might be
> > running rogue code) access to.
> >
> > Thoughts?
>
> Do you mean the parts of the pages you map to the device that are not in
> the requested range (basically everything before offset and all after
> size)?
>
> This issue exists in other iommu drivers as well. It is inherent to how
> the dma-api is defined and how the iommu hardware works. The dma-api can
> work on byte granularity while the hardware usually only works on page
> granularity.

True, but if we implement address rounding transparently in the IOMMU
layer, Ohad's concern can be valid, depending on whether the device is
trusted. If we decide to push address rounding to the drivers, that
decision can be made on a per-device basis. However, drivers are usually
not aware of what granularity the IOMMU works on, so that wouldn't be
straightforward and clean.
Hi Laurent, Joerg,

On Wed, Aug 31, 2011 at 7:56 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> On Wednesday 31 August 2011 15:06:42 Roedel, Joerg wrote:
>> Do you mean the parts of the pages you map to the device that are not in
>> the requested range (basically everything before offset and all after
>> size)?
>> This issue exists in other iommu drivers as well. It is inherent to how
>> the dma-api is defined and how the iommu hardware works.
>> The dma-api can work on byte granularity while the hardware usually only
>> works on page granularity.
>
> True, but if we implement address rounding transparently in the IOMMU layer
> Ohad's concern can be valid, depending on whether the device is trusted. If we
> decide to push address rounding to the drivers that decision can be made on a
> per-device basis. However, drivers are usually not aware of what granularity
> the IOMMU works on, so that wouldn't be straightforward and clean.

I think the confusion lies in the fact that omap's iovmm should be
treated as the DMA-API (which will soon replace it altogether anyway)
and not as an IOMMU driver, and in that sense, Laurent's patch does
sound reasonable.

However, one needs to keep this in mind (e.g. map only page-aligned
buffers?) when using the upcoming iommu-based DMA-API with untrusted
devices (such as remote processors running arbitrary code). It is
completely wrong to allow these devices access to random physical
memory regions. We might want to come up with a way to prevent this
from happening...

Thanks,
Ohad.
On Wed, Aug 31, 2011 at 12:56:26PM -0400, Laurent Pinchart wrote:
> True, but if we implement address rounding transparently in the IOMMU layer
> Ohad's concern can be valid, depending on whether the device is trusted. If we
> decide to push address rounding to the drivers that decision can be made on a
> per-device basis. However, drivers are usually not aware of what granularity
> the IOMMU works on, so that wouldn't be straightforward and clean.

Drivers usually just use the DMA-API, so they cannot assume that any
protection happens at all. The problem also can't really be solved
without changing/breaking the DMA-API: it is defined to work on byte
granularity while the hardware only works with pages.

The only way to work around this is to implement the driver so that it
only maps page-aligned buffers whose size is also a multiple of the
page size. In that case you can assume the page size you are working
on, because you already assume in the driver that an IOMMU is in use.

	Joerg
diff --git a/drivers/iommu/omap-iovmm.c b/drivers/iommu/omap-iovmm.c
index 5e7f97d..d28a256 100644
--- a/drivers/iommu/omap-iovmm.c
+++ b/drivers/iommu/omap-iovmm.c
@@ -27,6 +27,15 @@
 
 static struct kmem_cache *iovm_area_cachep;
 
+/* return the offset of the first scatterlist entry in a sg table */
+static unsigned int sgtable_offset(const struct sg_table *sgt)
+{
+	if (!sgt || !sgt->nents)
+		return 0;
+
+	return sgt->sgl->offset;
+}
+
 /* return total bytes of sg buffers */
 static size_t sgtable_len(const struct sg_table *sgt)
 {
@@ -39,11 +48,17 @@ static size_t sgtable_len(const struct sg_table *sgt)
 	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
 		size_t bytes;
 
-		bytes = sg->length;
+		bytes = sg->length + sg->offset;
 
 		if (!iopgsz_ok(bytes)) {
-			pr_err("%s: sg[%d] not iommu pagesize(%x)\n",
-			       __func__, i, bytes);
+			pr_err("%s: sg[%d] not iommu pagesize(%u %u)\n",
+			       __func__, i, bytes, sg->offset);
+			return 0;
+		}
+
+		if (i && sg->offset) {
+			pr_err("%s: sg[%d] offset not allowed in internal "
+			       "entries\n", __func__, i);
 			return 0;
 		}
 
@@ -164,8 +179,8 @@ static void *vmap_sg(const struct sg_table *sgt)
 		u32 pa;
 		int err;
 
-		pa = sg_phys(sg);
-		bytes = sg->length;
+		pa = sg_phys(sg) - sg->offset;
+		bytes = sg->length + sg->offset;
 
 		BUG_ON(bytes != PAGE_SIZE);
 
@@ -405,8 +420,8 @@ static int map_iovm_area(struct iommu_domain *domain, struct iovm_struct *new,
 		u32 pa;
 		size_t bytes;
 
-		pa = sg_phys(sg);
-		bytes = sg->length;
+		pa = sg_phys(sg) - sg->offset;
+		bytes = sg->length + sg->offset;
 
 		flags &= ~IOVMF_PGSZ_MASK;
 
@@ -600,7 +615,7 @@ u32 omap_iommu_vmap(struct iommu_domain *domain, struct omap_iommu *obj, u32 da,
 	if (IS_ERR_VALUE(da))
 		vunmap_sg(va);
 
-	return da;
+	return da + sgtable_offset(sgt);
 }
 EXPORT_SYMBOL_GPL(omap_iommu_vmap);
 
@@ -620,6 +635,7 @@ omap_iommu_vunmap(struct iommu_domain *domain, struct omap_iommu *obj, u32 da)
 	 * 'sgt' is allocated before 'omap_iommu_vmalloc()' is called.
 	 * Just returns 'sgt' to the caller to free
 	 */
+	da &= PAGE_MASK;
 	sgt = unmap_vm_area(domain, obj, da, vunmap_sg,
 			IOVMF_DISCONT | IOVMF_MMIO);
 	if (!sgt)