Message ID: 20231226200205.562565-1-pasha.tatashin@soleen.com (mailing list archive)
Series: IOMMU memory observability
On Tue, Dec 26, 2023 at 08:01:55PM +0000, Pasha Tatashin wrote:
> ----------------------------------------------------------------------
> Changelog
> ----------------------------------------------------------------------
> v3:
> - Sync with v6.7-rc7
> - Addressed comments from David Rientjes: s/pages/page/, added
>   unlikely() into the branches, expanded the comment for
>   iommu_free_pages_list().
> - Added Acked-by: David Rientjes
>
> v2:
> - Added Reviewed-by: Janne Grunau
> - Sync with 6.7.0-rc3, 3b47bc037bd44f142ac09848e8d3ecccc726be99
> - Separated from the series, as suggested by Jason Gunthorpe, the
>   patches:
>     vhost-vdpa: account iommu allocations
>     https://lore.kernel.org/all/20231130200447.2319543-1-pasha.tatashin@soleen.com
>     vfio: account iommu allocations
>     https://lore.kernel.org/all/20231130200900.2320829-1-pasha.tatashin@soleen.com
> - Fixed a SPARC build issue detected by the kernel test robot
> - Dropped the following patches, as suggested by Robin Murphy, since
>   they do not account IOMMU page tables:
>     iommu/dma: use page allocation function provided by iommu-pages.h
>     iommu/fsl: use page allocation function provided by iommu-pages.h
>     iommu/iommufd: use page allocation function provided by iommu-pages.h
>   These patches are not related to IOMMU page tables; separate work may
>   be needed to support DMA observability.
> - Removed support for iommu/io-pgtable-arm-v7s, as its 2nd-level page
>   tables are smaller than a page; thanks to Robin Murphy for pointing
>   this out.
>
> ----------------------------------------------------------------------
> Description
> ----------------------------------------------------------------------
> The IOMMU subsystem may hold gigabytes of state, the majority of it
> IOMMU page tables. Yet there is currently no way to observe how much
> memory is actually used by the IOMMU subsystem.
>
> This patch series solves this problem by adding both observability for
> all pages that are allocated by the IOMMU, and accountability, so that
> admins can limit the amount via cgroups.
>
> System-wide observability uses /proc/meminfo:
>   SecPageTables:    438176 kB
>
>   Contains IOMMU and KVM memory.
>
> Per-node observability:
>   /sys/devices/system/node/nodeN/meminfo
>   Node N SecPageTables:   422204 kB
>
>   Contains IOMMU and KVM memory in the given NUMA node.
>
> Per-node IOMMU-only observability:
>   /sys/devices/system/node/nodeN/vmstat
>   nr_iommu_pages 105555
>
>   Contains the number of pages the IOMMU allocated in the given node.
>
> Accountability: via the sec_pagetables cgroup-v2 memory.stat entry.
>
> With the change, iova_stress [1] stops once the limit is reached:
>
>   # ./iova_stress
>   iova space:     0T      free memory:   497G
>   iova space:     1T      free memory:   495G
>   iova space:     2T      free memory:   493G
>   iova space:     3T      free memory:   491G
>
> This series incorporates suggestions that came from the discussion
> at LPC [2].
> ----------------------------------------------------------------------
> [1] https://github.com/soleen/iova_stress
> [2] https://lpc.events/event/17/contributions/1466
> ----------------------------------------------------------------------
> Previous versions
> v1: https://lore.kernel.org/all/20231128204938.1453583-1-pasha.tatashin@soleen.com
> v2: https://lore.kernel.org/linux-mm/20231130201504.2322355-1-pasha.tatashin@soleen.com
> ----------------------------------------------------------------------

First of all, Merry Christmas and Happy New Year to all! As for this
series: no observable regressions when booting the kernel with the
series applied.

Tested-by: Bagas Sanjaya <bagasdotme@gmail.com>
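The observability interfaces quoted in the cover letter are plain-text files. As a quick illustrative sketch (sample strings copied from the cover letter, not read from a live system), parsing them could look like:

```python
# Parse the /proc/meminfo and per-node vmstat formats quoted above.
# On a live system you would read /proc/meminfo and
# /sys/devices/system/node/node0/vmstat instead of these samples.

meminfo_sample = "SecPageTables:    438176 kB\n"
vmstat_sample = "nr_iommu_pages 105555\n"

def meminfo_field(text, name):
    """Return a meminfo field value in kB, or None if absent."""
    for line in text.splitlines():
        if line.startswith(name + ":"):
            return int(line.split()[1])
    return None

def vmstat_field(text, name):
    """Return a vmstat counter (a bare page count), or None if absent."""
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if key == name:
            return int(value)
    return None

print(meminfo_field(meminfo_sample, "SecPageTables"))  # kB; IOMMU + KVM
print(vmstat_field(vmstat_sample, "nr_iommu_pages"))   # pages; IOMMU only
```

Note that SecPageTables is reported in kB while nr_iommu_pages is a raw page count, a distinction that matters when comparing the two.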
> > This series incorporates suggestions that came from the discussion
> > at LPC [2].
> > ----------------------------------------------------------------------
> > [1] https://github.com/soleen/iova_stress
> > [2] https://lpc.events/event/17/contributions/1466
> > ----------------------------------------------------------------------
> > Previous versions
> > v1: https://lore.kernel.org/all/20231128204938.1453583-1-pasha.tatashin@soleen.com
> > v2: https://lore.kernel.org/linux-mm/20231130201504.2322355-1-pasha.tatashin@soleen.com
> > ----------------------------------------------------------------------
>
> First of all, Merry Christmas and Happy New Year to all!
>
> And for this series, no observable regressions when booting the kernel
> with the series applied.
>
> Tested-by: Bagas Sanjaya <bagasdotme@gmail.com>

Thank you for testing.

Pasha
Hello.

On Tue, Dec 26, 2023 at 08:01:55PM +0000, Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
> This patch series solves this problem by adding both observability for
> all pages that are allocated by the IOMMU, and accountability, so that
> admins can limit the amount via cgroups.

Maybe this is a vocabulary mismatch over what you mean by the verb
"limit", but I don't see in the patchset that the offending pages would
be allocated with GFP_ACCOUNT. So the result is that the pages are
accounted (you can view the amount in memory.stat) but they are not
subject to memcg limits.

Is that what you intend?

Regards,
Michal
On Thu, Jan 4, 2024 at 10:31 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> Hello.
>
> On Tue, Dec 26, 2023 at 08:01:55PM +0000, Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
> > This patch series solves this problem by adding both observability for
> > all pages that are allocated by the IOMMU, and accountability, so that
> > admins can limit the amount via cgroups.
>
> Maybe this is a vocabulary mismatch over what you mean by the verb
> "limit", but I don't see in the patchset that the offending pages would
> be allocated with GFP_ACCOUNT. So the result is that the pages are
> accounted (you can view the amount in memory.stat) but they are not
> subject to memcg limits.
>
> Is that what you intend?

Hi Michal,

Thank you for taking a look at this. The two patches [1] [2] that add
GFP_KERNEL_ACCOUNT were sent separately from this series at the request
of reviewers:

[1] https://lore.kernel.org/linux-mm/20231226182827.294158-1-pasha.tatashin@soleen.com
[2] https://lore.kernel.org/linux-mm/20231130200900.2320829-1-pasha.tatashin@soleen.com

Pasha

> Regards,
> Michal
On Thu, Jan 04, 2024 at 11:29:43AM -0500, Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
> Thank you for taking a look at this. The two patches [1] [2] that add
> GFP_KERNEL_ACCOUNT were sent separately from this series at the request
> of reviewers:

Ah, I didn't catch that.

Though, I mean that patch 02/10 calls iommu_alloc_pages() with
GFP_KERNEL (and not a gfp passed down from iommu_map).
Then patch 09/10 accounts all iommu_alloc_pages() under NR_IOMMU_PAGES.

I think there is a difference between what NR_IOMMU_PAGES shows and what
will have __GFP_ACCOUNT because of that.

I.e., is it the intention that this difference is not subject to
limiting?

(Note: I'm not familiar with the iommu code, and moreover I'm only
looking at the two patch sets, not the complete code applied, so you may
correct my reasoning.)

Thanks,
Michal
On Thu, Jan 4, 2024 at 12:04 PM Michal Koutný <mkoutny@suse.com> wrote:
>
> On Thu, Jan 04, 2024 at 11:29:43AM -0500, Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
> > Thank you for taking a look at this. The two patches [1] [2] that add
> > GFP_KERNEL_ACCOUNT were sent separately from this series at the
> > request of reviewers:
>
> Ah, I didn't catch that.
>
> Though, I mean that patch 02/10 calls iommu_alloc_pages() with
> GFP_KERNEL (and not a gfp passed down from iommu_map).
> Then patch 09/10 accounts all iommu_alloc_pages() under NR_IOMMU_PAGES.
>
> I think there is a difference between what NR_IOMMU_PAGES shows and
> what will have __GFP_ACCOUNT because of that.
>
> I.e., is it the intention that this difference is not subject to
> limiting?

Yes, there will be a difference between what is allocated with
GFP_ACCOUNT and what NR_IOMMU_PAGES shows. GFP_ACCOUNT is set only where
it makes sense to charge a user process, i.e. IOMMU page tables, but
there is more IOMMU shared data that should not really be charged to a
specific process. Both the charged and uncharged data are visible via
the /proc/vmstat nr_iommu_pages field.

Pasha

> (Note: I'm not familiar with the iommu code, and moreover I'm only
> looking at the two patch sets, not the complete code applied, so you
> may correct my reasoning.)
>
> Thanks,
> Michal
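For reference when comparing the two counters discussed here: nr_iommu_pages is a raw page count covering both charged and uncharged IOMMU allocations, while SecPageTables (IOMMU plus KVM) is reported in kB. A small arithmetic sketch with the per-node numbers quoted in the cover letter, assuming 4 KiB pages (the two snapshots need not match exactly, as they were not necessarily taken at the same instant):

```python
# Convert the per-node nr_iommu_pages counter (pages) into kB so it can
# be put side by side with the per-node SecPageTables value (kB).
# Numbers are the examples from the cover letter, not live readings.

PAGE_SIZE_KB = 4  # assuming 4 KiB pages

nr_iommu_pages = 105555      # from /sys/devices/system/node/nodeN/vmstat
sec_page_tables_kb = 422204  # from /sys/devices/system/node/nodeN/meminfo

iommu_kb = nr_iommu_pages * PAGE_SIZE_KB
print(iommu_kb)  # IOMMU allocations in kB, comparable to SecPageTables
```

With these sample values the converted figure comes out within a few kB of the SecPageTables snapshot, consistent with IOMMU pages dominating that counter on this machine.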
On Thu, Jan 04, 2024 at 02:12:26PM -0500, Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
> Yes, there will be a difference between what is allocated with
> GFP_ACCOUNT and what NR_IOMMU_PAGES shows. GFP_ACCOUNT is set only
> where it makes sense to charge a user process, i.e. IOMMU page tables,
> but there is more IOMMU shared data that should not really be charged
> to a specific process.

I see. I'd suggest adding this explanation to the commit message of
10/10 (perhaps with some ballpark numbers of pages), in order to have a
reference and an understanding if someone decides to charge (and limit)
all of it in the future.

Thanks,
Michal
On Fri, Jan 5, 2024 at 4:02 AM Michal Koutný <mkoutny@suse.com> wrote:
>
> On Thu, Jan 04, 2024 at 02:12:26PM -0500, Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
> > Yes, there will be a difference between what is allocated with
> > GFP_ACCOUNT and what NR_IOMMU_PAGES shows. GFP_ACCOUNT is set only
> > where it makes sense to charge a user process, i.e. IOMMU page
> > tables, but there is more IOMMU shared data that should not really
> > be charged to a specific process.
>
> I see. I'd suggest adding this explanation to the commit message of
> 10/10 (perhaps with some ballpark numbers of pages), in order to have
> a reference and an understanding if someone decides to charge (and
> limit) all of it in the future.

Sure, I will update the commit log in 10/10 with this info if there is
a v4.

Pasha

> Thanks,
> Michal