[RFC,0/7] Huge page-table entries for TTM

Message ID: 20191127083120.34611-1-thomas_os@shipmail.org

Thomas Hellström (Intel) Nov. 27, 2019, 8:31 a.m. UTC
In order to save TLB space and CPU usage, this patchset enables huge and giant
page-table entries for TTM and TTM-enabled graphics drivers.

Patch 1 introduces a vma_is_special_huge() function to make the mm code
take the same path as DAX when splitting huge and giant page-table entries,
that is, zapping the page-table entry and relying on re-faulting.
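
A rough sketch of what such a helper could look like (illustrative only;
the authoritative predicate is in patch 1, and treating file-backed
VM_PFNMAP / VM_MIXEDMAP vmas like DAX here is inferred from the
description above):

/* Sketch: "special huge" vmas are split by zapping the huge entry and
 * relying on re-faulting, rather than by materializing a PTE table. */
static inline bool vma_is_special_huge(struct vm_area_struct *vma)
{
	return vma_is_dax(vma) ||
	       (vma->vm_file &&
		(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
}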

Patch 2 makes the mm code split existing huge page-table entries
on huge_fault fallbacks, typically on COW or on buffer objects that want
write-notify. COW and write-notification are always done on the lowest
page-table level. See the patch log message for additional considerations.
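
Roughly, the fallback path then looks like the following kernel-internal
sketch, modeled on mm's wp_huge_pmd(); the exact call sites and details
are in patch 2:

static vm_fault_t wp_huge_pmd_sketch(struct vm_fault *vmf)
{
	if (vmf->vma->vm_ops && vmf->vma->vm_ops->huge_fault) {
		vm_fault_t ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);

		if (!(ret & VM_FAULT_FALLBACK))
			return ret;
	}

	/* COW / write-notify is handled at the lowest level: split the
	 * huge entry so the fault can be retried on PTE entries. */
	__split_huge_pmd(vmf->vma, vmf->pmd, vmf->address, false, NULL);

	return VM_FAULT_FALLBACK;
}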

Patch 3 introduces functions that allow graphics drivers to manipulate
the caching and encryption flags of huge page-table entries without ugly
hacks.
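
As an illustration, a driver wanting a write-combined huge mapping could
then do something like the sketch below (the function name follows the
series; treat the exact signature as an assumption):

/* Sketch: insert a PMD-sized entry with an explicit pgprot (here
 * write-combined) instead of the vma's default vm_page_prot. */
static vm_fault_t insert_wc_huge_pmd(struct vm_fault *vmf, pfn_t pfn,
				     bool write)
{
	pgprot_t prot = pgprot_writecombine(vmf->vma->vm_page_prot);

	return vmf_insert_pfn_pmd_prot(vmf, pfn, prot, write);
}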

Patch 4 implements the huge_fault handler in TTM.

This enables huge page-table entries, provided that the kernel is configured
to support transparent huge pages, either by default or using madvise().
However, huge entries are unlikely to be inserted unless the kernel buffer
object pfns and the user-space addresses align perfectly. There are various
options here, but since buffer objects that reside in system pages typically
start at huge page boundaries if they are backed by huge pages, we try to
enforce huge page-size alignment of buffer object starting pfns and
user-space addresses whenever the buffer object size exceeds a huge
page-size. If pud-size transhuge ("giant") pages are enabled by the arch,
the same holds for those.

Patch 5 implements a drm helper to align user-space addresses according
to the above scheme, if possible.
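
One straightforward way to get that alignment, shown here purely as a
sketch (the helper in patch 5 integrates with the unmapped-area search
instead and also handles the PUD case), is to over-allocate the search
window and round up:

/* Sketch: huge-page align large mappings by asking for extra slack and
 * rounding the result up to a PMD boundary. Hypothetical helper name. */
static unsigned long sketch_align_unmapped_area(struct file *file,
						unsigned long addr,
						unsigned long len,
						unsigned long pgoff,
						unsigned long flags)
{
	unsigned long area;

	if (len < HPAGE_PMD_SIZE || (flags & MAP_FIXED))
		return current->mm->get_unmapped_area(file, addr, len,
						      pgoff, flags);

	/* Extra slack so rounding up cannot collide with a neighbour. */
	area = current->mm->get_unmapped_area(file, addr,
					      len + HPAGE_PMD_SIZE,
					      pgoff, flags);
	if (IS_ERR_VALUE(area))
		return area;

	return ALIGN(area, HPAGE_PMD_SIZE);
}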

Patch 6 implements a TTM range manager that does the same for graphics IO
memory.
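
The underlying placement rule is simple: choose the allocation alignment
from the buffer object size. A hedged sketch of that decision, assuming
CONFIG_TRANSPARENT_HUGEPAGE (the manager in patch 6 wraps this kind of
rule into a TTM range manager; the helper below is illustrative):

/* Sketch: pick a start-pfn alignment (in pages) so that sufficiently
 * large buffer objects start on huge (or giant) page boundaries. */
static unsigned long huge_io_alignment(unsigned long num_pages)
{
	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
	    num_pages >= (HPAGE_PUD_SIZE >> PAGE_SHIFT))
		return HPAGE_PUD_SIZE >> PAGE_SHIFT;

	if (num_pages >= (HPAGE_PMD_SIZE >> PAGE_SHIFT))
		return HPAGE_PMD_SIZE >> PAGE_SHIFT;

	return 1;	/* single-page alignment for small objects */
}

The returned value would then be used as the alignment of the range
allocation for the buffer object's IO memory.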

Patch 7 finally hooks up the helpers of patches 5 and 6 to the vmwgfx driver.
A similar change is needed for any graphics driver that wants a reasonable
likelihood of actually using huge page-table entries.
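
For the user-space address part, the hookup amounts to routing the
device file's get_unmapped_area through the new DRM helper, roughly like
this (helper and field names are taken from the series where known and
otherwise illustrative):

/* Sketch of the fops hookup: let DRM pick a huge-page aligned
 * user-space address for mappings of this device file. */
static unsigned long vmw_get_unmapped_area(struct file *file,
					   unsigned long uaddr,
					   unsigned long len,
					   unsigned long pgoff,
					   unsigned long flags)
{
	struct drm_file *file_priv = file->private_data;
	struct drm_device *dev = file_priv->minor->dev;

	return drm_get_unmapped_area(file, uaddr, len, pgoff, flags,
				     dev->vma_offset_manager);
}

The wrapper is then plugged into the driver's file_operations as its
.get_unmapped_area callback.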

Finally, if a buffer object size is not huge-page or giant-page aligned,
its size will NOT be inflated by this patchset. This means that the buffer
object tail will use smaller page-table entries, and thus no memory
overhead occurs. Drivers that want to pay the memory-overhead price need to
implement their own scheme to inflate buffer-object sizes.

PMD-size huge page-table entries have been tested with vmwgfx and found to
work well with both system-memory-backed and IO-memory-backed buffer objects.

PUD-size giant page-table entries have seen limited (fault and COW) testing
using a modified kernel and a fake vmwgfx TTM memory type. The vmwgfx driver
does not otherwise support 1GB-size IO memory resources.

Comments and suggestions welcome.
Thomas


Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: "Christian König" <christian.koenig@amd.com>