
[06/16] iommu/dma: use page allocation function provided by iommu-pages.h

Message ID 20231128204938.1453583-7-pasha.tatashin@soleen.com (mailing list archive)
State New, archived
Series IOMMU memory observability

Commit Message

Pasha Tatashin Nov. 28, 2023, 8:49 p.m. UTC
Convert iommu/dma-iommu.c to use the new page allocation functions
provided in iommu-pages.h.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/iommu/dma-iommu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
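
For context, here is a minimal sketch of what accounting wrappers in
the style of iommu-pages.h could look like. This is an illustration
only, not the code from the series; in particular, NR_IOMMU_PAGES is
an assumed vmstat counter name, and the series may account differently.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmstat.h>

/* Allocate pages and account them so they can be reported to userspace. */
static inline struct page *__iommu_alloc_pages_node(int nid, gfp_t gfp,
						    int order)
{
	struct page *page = alloc_pages_node(nid, gfp, order);

	if (page)
		mod_node_page_state(page_pgdat(page), NR_IOMMU_PAGES,
				    1L << order);
	return page;
}

/* Unaccount and free a single page allocated with the helper above. */
static inline void __iommu_free_page(struct page *page)
{
	mod_node_page_state(page_pgdat(page), NR_IOMMU_PAGES, -1);
	__free_page(page);
}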

Comments

Robin Murphy Nov. 28, 2023, 10:33 p.m. UTC | #1
On 2023-11-28 8:49 pm, Pasha Tatashin wrote:
> Convert iommu/dma-iommu.c to use the new page allocation functions
> provided in iommu-pages.h.

These have nothing to do with IOMMU pagetables, they are DMA buffers and 
they belong to whoever called the corresponding dma_alloc_* function.

Thanks,
Robin.

> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>   drivers/iommu/dma-iommu.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 85163a83df2f..822adad464c2 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -31,6 +31,7 @@
>   #include <linux/vmalloc.h>
>   
>   #include "dma-iommu.h"
> +#include "iommu-pages.h"
>   
>   struct iommu_dma_msi_page {
>   	struct list_head	list;
> @@ -874,7 +875,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
>   static void __iommu_dma_free_pages(struct page **pages, int count)
>   {
>   	while (count--)
> -		__free_page(pages[count]);
> +		__iommu_free_page(pages[count]);
>   	kvfree(pages);
>   }
>   
> @@ -912,7 +913,8 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
>   			order_size = 1U << order;
>   			if (order_mask > order_size)
>   				alloc_flags |= __GFP_NORETRY;
> -			page = alloc_pages_node(nid, alloc_flags, order);
> +			page = __iommu_alloc_pages_node(nid, alloc_flags,
> +							order);
>   			if (!page)
>   				continue;
>   			if (order)
> @@ -1572,7 +1574,7 @@ static void *iommu_dma_alloc_pages(struct device *dev, size_t size,
>   
>   	page = dma_alloc_contiguous(dev, alloc_size, gfp);
>   	if (!page)
> -		page = alloc_pages_node(node, gfp, get_order(alloc_size));
> +		page = __iommu_alloc_pages_node(node, gfp, get_order(alloc_size));
>   	if (!page)
>   		return NULL;
>
Pasha Tatashin Nov. 28, 2023, 10:50 p.m. UTC | #2
On Tue, Nov 28, 2023 at 5:34 PM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2023-11-28 8:49 pm, Pasha Tatashin wrote:
> > Convert iommu/dma-iommu.c to use the new page allocation functions
> > provided in iommu-pages.h.
>
> These have nothing to do with IOMMU pagetables, they are DMA buffers and
> they belong to whoever called the corresponding dma_alloc_* function.

Hi Robin,

This is true; however, we want to account for and observe the pages
allocated by the IOMMU subsystem for DMA buffers, as they are
essentially unmovable locked pages. Should we separate IOMMU memory
from KVM memory altogether and add another field to /proc/meminfo,
something like "iommu -> iommu pagetable and dma memory", or do we
want to export DMA memory separately from IOMMU page tables?

Since I included DMA memory, I specifically removed mention of IOMMU
page tables in most places and only report it as IOMMU memory.
However, since it is still bundled together with SecPageTables, it
can be confusing.

Pasha
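
(For reference, the bundled counter discussed above is the
SecPageTables field in /proc/meminfo; the value below is purely
illustrative.)

$ grep SecPageTables /proc/meminfo
SecPageTables:      2048 kB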
Robin Murphy Nov. 28, 2023, 10:59 p.m. UTC | #3
On 2023-11-28 10:50 pm, Pasha Tatashin wrote:
> On Tue, Nov 28, 2023 at 5:34 PM Robin Murphy <robin.murphy@arm.com> wrote:
>>
>> On 2023-11-28 8:49 pm, Pasha Tatashin wrote:
>>> Convert iommu/dma-iommu.c to use the new page allocation functions
>>> provided in iommu-pages.h.
>>
>> These have nothing to do with IOMMU pagetables, they are DMA buffers and
>> they belong to whoever called the corresponding dma_alloc_* function.
> 
> Hi Robin,
> 
> This is true; however, we want to account for and observe the pages
> allocated by the IOMMU subsystem for DMA buffers, as they are
> essentially unmovable locked pages. Should we separate IOMMU memory
> from KVM memory altogether and add another field to /proc/meminfo,
> something like "iommu -> iommu pagetable and dma memory", or do we
> want to export DMA memory separately from IOMMU page tables?

These are not allocated by "the IOMMU subsystem", they are allocated by 
the DMA API. Even if you want to claim that a driver pinning memory via 
iommu_dma_ops is somehow different from the same driver pinning the same 
amount of memory via dma-direct when iommu.passthrough=1, it's still 
nonsense because you're failing to account the pages which iommu_dma_ops 
gets from CMA, dma_common_alloc_pages(), dynamic SWIOTLB, the various 
pools, and so on.

Thanks,
Robin.

> Since I included DMA memory, I specifically removed mention of IOMMU
> page tables in most places and only report it as IOMMU memory.
> However, since it is still bundled together with SecPageTables, it
> can be confusing.
> 
> Pasha
Pasha Tatashin Nov. 28, 2023, 11:08 p.m. UTC | #5
> > This is true; however, we want to account for and observe the pages
> > allocated by the IOMMU subsystem for DMA buffers, as they are
> > essentially unmovable locked pages. Should we separate IOMMU memory
> > from KVM memory altogether and add another field to /proc/meminfo,
> > something like "iommu -> iommu pagetable and dma memory", or do we
> > want to export DMA memory separately from IOMMU page tables?
>
> These are not allocated by "the IOMMU subsystem", they are allocated by
> the DMA API. Even if you want to claim that a driver pinning memory via
> iommu_dma_ops is somehow different from the same driver pinning the same
> amount of memory via dma-direct when iommu.passthrough=1, it's still
> nonsense because you're failing to account the pages which iommu_dma_ops
> gets from CMA, dma_common_alloc_pages(), dynamic SWIOTLB, the various
> pools, and so on.

I see: the IOMMU variants are used only for the discontiguous
allocations, and the common ones are defined outside of drivers/iommu.
Alright, I can remove these changes for all IOMMU allocations that are
not related to page tables.

Pasha
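
The direction agreed on here would roughly amount to keeping the
accounting wrappers for IOMMU page-table allocations only, while
DMA-buffer paths such as __iommu_dma_alloc_pages() return to the plain
page allocator. A sketch against a hypothetical page-table allocation
site (not the actual follow-up patch):

static u64 *iommu_alloc_pgtable_page(int nid, gfp_t gfp)
{
	/* Page-table memory keeps the accounting wrapper. */
	struct page *page = __iommu_alloc_pages_node(nid, gfp | __GFP_ZERO, 0);

	return page ? page_address(page) : NULL;
}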

Patch

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 85163a83df2f..822adad464c2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -31,6 +31,7 @@ 
 #include <linux/vmalloc.h>
 
 #include "dma-iommu.h"
+#include "iommu-pages.h"
 
 struct iommu_dma_msi_page {
 	struct list_head	list;
@@ -874,7 +875,7 @@  static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 static void __iommu_dma_free_pages(struct page **pages, int count)
 {
 	while (count--)
-		__free_page(pages[count]);
+		__iommu_free_page(pages[count]);
 	kvfree(pages);
 }
 
@@ -912,7 +913,8 @@  static struct page **__iommu_dma_alloc_pages(struct device *dev,
 			order_size = 1U << order;
 			if (order_mask > order_size)
 				alloc_flags |= __GFP_NORETRY;
-			page = alloc_pages_node(nid, alloc_flags, order);
+			page = __iommu_alloc_pages_node(nid, alloc_flags,
+							order);
 			if (!page)
 				continue;
 			if (order)
@@ -1572,7 +1574,7 @@  static void *iommu_dma_alloc_pages(struct device *dev, size_t size,
 
 	page = dma_alloc_contiguous(dev, alloc_size, gfp);
 	if (!page)
-		page = alloc_pages_node(node, gfp, get_order(alloc_size));
+		page = __iommu_alloc_pages_node(node, gfp, get_order(alloc_size));
 	if (!page)
 		return NULL;