| Message ID | 1406854484-3848-2-git-send-email-ohaugan@codeaurora.org (mailing list archive) |
|---|---|
| State | New, archived |
Hi Olav,

On Fri, Aug 01, 2014 at 01:54:44AM +0100, Olav Haugan wrote:
> Mapping and unmapping are more often than not in the critical path.
> map_sg and unmap_sg allows IOMMU driver implementations to optimize
> the process of mapping and unmapping buffers into the IOMMU page tables.
>
> Instead of mapping a buffer one page at a time and requiring potentially
> expensive TLB operations for each page, this function allows the driver
> to map all pages in one go and defer TLB maintenance until after all
> pages have been mapped.
>
> Additionally, the mapping operation would be faster in general since
> clients does not have to keep calling map API over and over again for
> each physically contiguous chunk of memory that needs to be mapped to a
> virtually contiguous region.

Just a couple of minor comments, but I think this is almost there now.

> Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
> ---
>  drivers/iommu/iommu.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>  include/linux/iommu.h | 28 ++++++++++++++++++++++++++++
>  2 files changed, 72 insertions(+)
>
> [...]
>
> +int iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
> +		 struct scatterlist *sg, unsigned int nents,
> +		 int prot, unsigned long flags)
> +{

What do you anticipate passing in the flags parameter? I assume it's
something specific to the scatterlist, since we can't provide this to
iommu_map as it stands?

> [...]
>
> +fail:
> +	/* undo mappings already done in case of error */
> +	iommu_unmap(domain, iova, offset);

I think this would be cleaner if you stuck it in the loop above and
removed all these labels:

	if (ret) {
		iommu_unmap(...);
		break;
	}

Will
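For readers following along, here is a minimal sketch of what the fallback path could look like with Will's suggestion folded in. It is an illustration of the proposed cleanup only, not the code that was actually submitted, and it is abbreviated to the body of the fallback branch.

```c
/*
 * Sketch only: the v4 fallback branch of iommu_map_sg() with the error
 * cleanup inlined as Will suggests, so the fail/out labels and the
 * trailing goto can be dropped. Not the final submitted code.
 */
if (unlikely(domain->ops->map_sg == NULL)) {
	unsigned int i;
	struct scatterlist *s;

	for_each_sg(sg, s, nents, i) {
		phys_addr_t phys = page_to_phys(sg_page(s));
		size_t page_len = s->offset + s->length;

		ret = iommu_map(domain, iova + offset, phys, page_len, prot);
		if (ret) {
			/* undo mappings already done in case of error */
			iommu_unmap(domain, iova, offset);
			break;
		}

		offset += page_len;
	}
} else {
	ret = domain->ops->map_sg(domain, iova, sg, nents, prot, flags);
}

return ret;
```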
Hi Will,

On 8/1/2014 1:22 AM, Will Deacon wrote:
> Hi Olav,
>
> On Fri, Aug 01, 2014 at 01:54:44AM +0100, Olav Haugan wrote:
>> [...]
>
> Just a couple of minor comments, but I think this is almost there now.
>
>> +int iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
>> +		 struct scatterlist *sg, unsigned int nents,
>> +		 int prot, unsigned long flags)
>> +{
>
> What do you anticipate passing in the flags parameter? I assume it's
> something specific to the scatterlist, since we can't provide this to
> iommu_map as it stands?

Initially the flags argument is planned to be used by clients to
indicate to the driver that no TLB operation is necessary. This allows
clients to, for example, map/unmap multiple scatter-gather lists without
doing expensive TLB invalidate operations for each call, and instead do
this only at the last mapping/unmapping call. I believe Rob Clark was
looking for this feature, and I can see the benefit for our use cases
as well.

>> +fail:
>> +	/* undo mappings already done in case of error */
>> +	iommu_unmap(domain, iova, offset);
>
> I think this would be cleaner if you stuck it in the loop above and
> removed all these labels:
>
> 	if (ret) {
> 		iommu_unmap(...);
> 		break;
> 	}

Sure, I can do that.

Thanks,

Olav
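To make the intent concrete, below is a client-side sketch of how such a flag might be used. The IOMMU_SG_DEFER_TLB name is made up for illustration; it is not defined by this patch, and the final flag semantics were still under discussion at this point.

```c
/* Hypothetical flag for illustration only -- not part of this patch. */
#define IOMMU_SG_DEFER_TLB	(1UL << 0)

/*
 * Map two scatter-gather lists but only pay for TLB maintenance once,
 * on the last call, as Olav describes above.
 */
static int map_two_buffers(struct iommu_domain *domain,
			   unsigned long iova1, struct scatterlist *sg1,
			   unsigned int nents1,
			   unsigned long iova2, struct scatterlist *sg2,
			   unsigned int nents2, int prot)
{
	int ret;

	/* First list: ask the driver to skip TLB invalidation for now. */
	ret = iommu_map_sg(domain, iova1, sg1, nents1, prot,
			   IOMMU_SG_DEFER_TLB);
	if (ret)
		return ret;

	/* Last list: flags == 0, so the driver does the deferred TLB work. */
	return iommu_map_sg(domain, iova2, sg2, nents2, prot, 0);
}
```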
Any more comments on this from anyone before I submit v5?

On 8/1/2014 9:44 AM, Olav Haugan wrote:
> [...]

Olav
On Thu, Jul 31, 2014 at 05:54:44PM -0700, Olav Haugan wrote:
> [...]
>
> Additionally, the mapping operation would be faster in general since
> clients does not have to keep calling map API over and over again for
> each physically contiguous chunk of memory that needs to be mapped to a
> virtually contiguous region.

That is assuming that physical == bus topology.

> [...]
>
> +int iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
> +		 struct scatterlist *sg, unsigned int nents,
> +		 int prot, unsigned long flags)
> +{
> +	int ret = 0;
> +	unsigned long offset = 0;
> +
> +	if (unlikely(domain->ops->map_sg == NULL)) {
> +		unsigned int i;
> +		struct scatterlist *s;
> +
> +		for_each_sg(sg, s, nents, i) {
> +			phys_addr_t phys = page_to_phys(sg_page(s));
> +			size_t page_len = s->offset + s->length;
> +
> +			ret = iommu_map(domain, iova + offset, phys, page_len,
> +					prot);
> +			if (ret)
> +				goto fail;
> +
> +			offset += page_len;
> +		}

I think it would be better if you had a 'default_iommu_map_sg' with
the implementation above. And then the default ops->map_sg would point to
that and each IOMMU would over-write with its own version.

That way you don't need any of this 'if' and can have the 'iommu_map_sg'
be in the header file (either as static inline or a macro).

> [...]
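As a rough illustration of the shape Konrad is describing (names follow his suggestion, but this is only a sketch, not code from the series): the fallback would live in drivers/iommu/iommu.c as an ordinary exported function, and the wrapper could then shrink to something like the following in include/linux/iommu.h.

```c
/* Sketch only -- not part of the posted patch. */

/* drivers/iommu/iommu.c would export the per-page fallback: */
extern int default_iommu_map_sg(struct iommu_domain *domain,
				unsigned long iova, struct scatterlist *sg,
				unsigned int nents, int prot,
				unsigned long flags);

/*
 * With every driver's ops->map_sg guaranteed to be non-NULL (either the
 * default above or an optimized version), iommu_map_sg() itself needs no
 * 'if' and could live in include/linux/iommu.h as a static inline.
 */
static inline int iommu_map_sg(struct iommu_domain *domain,
			       unsigned long iova, struct scatterlist *sg,
			       unsigned int nents, int prot,
			       unsigned long flags)
{
	return domain->ops->map_sg(domain, iova, sg, nents, prot, flags);
}
```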
On 8/5/2014 8:13 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 31, 2014 at 05:54:44PM -0700, Olav Haugan wrote:
>> [...]
>
> I think it would be better if you had a 'default_iommu_map_sg' with
> the implementation above. And then the default ops->map_sg would point to
> that and each IOMMU would over-write with its own version.
>
> That way you don't need any of this 'if' and can have the 'iommu_map_sg'
> be in the header file (either as static inline or a macro).

So you are suggesting that I check in "bus_set_iommu()" whether the
driver has set the map_sg/unmap_sg function pointers or not and if not
set it to the default? Is bus_set_iommu() the only way drivers can set
up the callbacks?

Olav
On Wed, Aug 06, 2014 at 10:08:55AM -0700, Olav Haugan wrote:
> So you are suggesting that I check in "bus_set_iommu()" whether the
> driver has set the map_sg/unmap_sg function pointers or not and if not
> set it to the default? Is bus_set_iommu() the only way drivers can set
> up the callbacks?

This doesn't work as the iommu_ops are now const. You have to either
update the iommu drivers individually to point to the default function,
or you do the check in the API function itself and fall back to the
default if no call-back is provided.

	Joerg
On 8/6/2014 1:17 PM, Joerg Roedel wrote:
> On Wed, Aug 06, 2014 at 10:08:55AM -0700, Olav Haugan wrote:
>> [...]
>
> This doesn't work as the iommu_ops are now const. You have to either
> update the iommu drivers individually to point to the default function,
> or you do the check in the API function itself and fall back to the
> default if no call-back is provided.

Ok, then I think it is better to just leave the fallback where it is now
in the function itself.

Thanks,

Olav
On Wed, Aug 06, 2014 at 04:28:45PM -0700, Olav Haugan wrote:
> On 8/6/2014 1:17 PM, Joerg Roedel wrote:
>> [...]
>
> Ok, then I think it is better to just leave the fallback where it is now
> in the function itself.

What Konrad was suggesting is what I also proposed. The idea is to
implement the fallback as a standalone function, then make all drivers
use that by default in the struct iommu_ops that they register. When
drivers implement an optimized version they can simply replace the
fallback implementation with their own.

Thierry
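In other words, each driver's registration might end up looking roughly like the following hypothetical example. The "foo_" driver and the default_iommu_unmap_sg name are illustrative only and do not come from the series.

```c
/*
 * Hypothetical driver registration illustrating Thierry's point: point
 * .map_sg/.unmap_sg at the shared defaults, and replace them only when
 * the driver has an optimized implementation of its own.
 */
static const struct iommu_ops foo_iommu_ops = {
	.attach_dev	= foo_iommu_attach_dev,
	.detach_dev	= foo_iommu_detach_dev,
	.map		= foo_iommu_map,
	.unmap		= foo_iommu_unmap,
	.map_sg		= default_iommu_map_sg,		/* shared fallback */
	.unmap_sg	= default_iommu_unmap_sg,	/* shared fallback */
	.iova_to_phys	= foo_iommu_iova_to_phys,
};
```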
On 8/6/2014 11:24 PM, Thierry Reding wrote:
> On Wed, Aug 06, 2014 at 04:28:45PM -0700, Olav Haugan wrote:
>> [...]
>
> What Konrad was suggesting is what I also proposed. The idea is to
> implement the fallback as a standalone function, then make all drivers
> use that by default in the struct iommu_ops that they register. When
> drivers implement an optimized version they can simply replace the
> fallback implementation with their own.

Ok, I can do that. I misunderstood the point of the fallback. I thought
the point of the fallback was to catch drivers that forget/neglect to
implement this callback. If that is not a concern I will update my patch
to create a separate function that I will point all existing drivers to.

Thanks,

Olav
On Thu, Aug 07, 2014 at 02:52:56PM -0700, Olav Haugan wrote:
> On 8/6/2014 11:24 PM, Thierry Reding wrote:
>> [...]
>>
>> What Konrad was suggesting is what I also proposed. The idea is to
>> implement the fallback as a standalone function, then make all drivers
>> use that by default in the struct iommu_ops that they register. When
>> drivers implement an optimized version they can simply replace the
>> fallback implementation with their own.
>
> Ok, I can do that. I misunderstood the point of the fallback. I thought
> the point of the fallback was to catch drivers that forget/neglect to
> implement this callback. If that is not a concern I will update my patch

Nah. We want those drivers to crash and burn so we can see that and fix
it. And by fix I meant they would just point to:

	.map_sg = generic_map_sg,
	.unmap_sg = generic_unmap_sg,

In other words, none of the function ops will have NULL functions.

> to create a separate function that I will point all existing drivers to.

Excellent!
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 1698360..1d5dc2e 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1088,6 +1088,50 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 }
 EXPORT_SYMBOL_GPL(iommu_unmap);
 
+int iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
+		 struct scatterlist *sg, unsigned int nents,
+		 int prot, unsigned long flags)
+{
+	int ret = 0;
+	unsigned long offset = 0;
+
+	if (unlikely(domain->ops->map_sg == NULL)) {
+		unsigned int i;
+		struct scatterlist *s;
+
+		for_each_sg(sg, s, nents, i) {
+			phys_addr_t phys = page_to_phys(sg_page(s));
+			size_t page_len = s->offset + s->length;
+
+			ret = iommu_map(domain, iova + offset, phys, page_len,
+					prot);
+			if (ret)
+				goto fail;
+
+			offset += page_len;
+		}
+	} else {
+		ret = domain->ops->map_sg(domain, iova, sg, nents, prot, flags);
+	}
+	goto out;
+
+fail:
+	/* undo mappings already done in case of error */
+	iommu_unmap(domain, iova, offset);
+out:
+	return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_map_sg);
+
+int iommu_unmap_sg(struct iommu_domain *domain, unsigned long iova,
+		   size_t size, unsigned long flags)
+{
+	if (unlikely(domain->ops->unmap_sg == NULL))
+		return iommu_unmap(domain, iova, size);
+	else
+		return domain->ops->unmap_sg(domain, iova, size, flags);
+}
+EXPORT_SYMBOL_GPL(iommu_unmap_sg);
 
 int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
 			       phys_addr_t paddr, u64 size, int prot)
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 20f9a52..66ad543 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -22,6 +22,7 @@
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/types.h>
+#include <linux/scatterlist.h>
 #include <trace/events/iommu.h>
 
 #define IOMMU_READ	(1 << 0)
@@ -93,6 +94,10 @@ enum iommu_attr {
  * @detach_dev: detach device from an iommu domain
  * @map: map a physically contiguous memory region to an iommu domain
  * @unmap: unmap a physically contiguous memory region from an iommu domain
+ * @map_sg: map a scatter-gather list of physically contiguous memory chunks
+ *          to an iommu domain
+ * @unmap_sg: unmap a scatter-gather list of physically contiguous memory
+ *            chunks from an iommu domain
  * @iova_to_phys: translate iova to physical address
  * @domain_has_cap: domain capabilities query
  * @add_device: add device to iommu grouping
@@ -110,6 +115,11 @@ struct iommu_ops {
 		   phys_addr_t paddr, size_t size, int prot);
 	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
 		     size_t size);
+	int (*map_sg)(struct iommu_domain *domain, unsigned long iova,
+		      struct scatterlist *sg, unsigned int nents, int prot,
+		      unsigned long flags);
+	int (*unmap_sg)(struct iommu_domain *domain, unsigned long iova,
+			size_t size, unsigned long flags);
 	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
 	int (*domain_has_cap)(struct iommu_domain *domain,
 			      unsigned long cap);
@@ -153,6 +163,11 @@ extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
 		     phys_addr_t paddr, size_t size, int prot);
 extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 		       size_t size);
+extern int iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
+			struct scatterlist *sg, unsigned int nents, int prot,
+			unsigned long flags);
+extern int iommu_unmap_sg(struct iommu_domain *domain, unsigned long iova,
+			  size_t size, unsigned long flags);
 extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova);
 extern int iommu_domain_has_cap(struct iommu_domain *domain,
 				unsigned long cap);
@@ -287,6 +302,19 @@ static inline int iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 	return -ENODEV;
 }
 
+static inline int iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
+			       struct scatterlist *sg, unsigned int nents,
+			       int prot, unsigned long flags)
+{
+	return -ENODEV;
+}
+
+static inline int iommu_unmap_sg(struct iommu_domain *domain,
+				 unsigned long iova, size_t size, unsigned long flags)
+{
+	return -ENODEV;
+}
+
 static inline int iommu_domain_window_enable(struct iommu_domain *domain,
 					     u32 wnd_nr, phys_addr_t paddr,
 					     u64 size, int prot)
Mapping and unmapping are more often than not in the critical path.
map_sg and unmap_sg allow IOMMU driver implementations to optimize the
process of mapping and unmapping buffers into the IOMMU page tables.

Instead of mapping a buffer one page at a time and requiring potentially
expensive TLB operations for each page, this function allows the driver
to map all pages in one go and defer TLB maintenance until after all
pages have been mapped.

Additionally, the mapping operation would be faster in general since
clients do not have to keep calling the map API over and over again for
each physically contiguous chunk of memory that needs to be mapped to a
virtually contiguous region.

Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
---
 drivers/iommu/iommu.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/iommu.h | 28 ++++++++++++++++++++++++++++
 2 files changed, 72 insertions(+)
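For context, the client-side effect of the proposed API is roughly the following. This is an illustrative sketch, not code from the series: instead of walking the scatterlist and calling iommu_map() once per physically contiguous chunk, a client hands the whole list to the driver in one call.

```c
/* Illustrative client usage of the proposed API -- not from the series. */
static int map_buffer(struct iommu_domain *domain, unsigned long iova,
		      struct scatterlist *sg, unsigned int nents, int prot)
{
	/*
	 * One call; the driver may map all pages in one go and defer TLB
	 * maintenance until the end instead of paying for it per page.
	 */
	return iommu_map_sg(domain, iova, sg, nents, prot, 0);
}

static void unmap_buffer(struct iommu_domain *domain, unsigned long iova,
			 size_t total_len)
{
	iommu_unmap_sg(domain, iova, total_len, 0);
}
```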