
[RFC,0/4] x86/mm/cpa: merge small mappings whenever possible

Message ID 20220808145649.2261258-1-aaron.lu@intel.com (mailing list archive)

Message

Aaron Lu Aug. 8, 2022, 2:56 p.m. UTC
This is an early RFC. While all reviews are welcome, reviewing this code
now will be a waste of time for the x86 subsystem maintainers. I would,
however, appreciate a preliminary review from the folks on the to and cc
list. I'm posting it to the list in case anyone else is interested in
seeing this early version.

Dave Hansen: I need your ack before this goes to the maintainers.

Here it goes:

On x86_64, Linux has a direct mapping of almost all physical memory. For
performance reasons, this mapping usually uses large pages, 2M or 1G
depending on hardware capability, with read, write and non-execute
(RW+NX) protection.

There are cases where some pages have to change their protection to RO
and eXecutable, like pages that host module code or bpf progs. When such
a page's protection is changed, the large mapping that covers it has to
be split into 4K mappings first, and then the individual 4K pages'
protection is changed accordingly, i.e. unaffected pages keep their
original RW+NX protection while the affected pages become RO+X.
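
To make this concrete, the protection change typically comes in through
the set_memory_*() APIs; the snippet below is only an illustration (not
code from any particular caller) of the sequence that forces a split and
the later sequence after which a merge would again be possible:

#include <linux/set_memory.h>

/*
 * Illustration only: changing one page's protection forces cpa() to
 * split the 2M/1G direct mapping covering it down to 4K PTEs, so that
 * only this page becomes RO+X.
 */
static void example_make_page_rox(void *p)
{
	unsigned long addr = (unsigned long)p;

	set_memory_ro(addr, 1);
	set_memory_x(addr, 1);
}

/*
 * On module unload / bpf prog free the page goes back to RW+NX, but
 * today the split direct mapping is never merged back.
 */
static void example_make_page_rwnx(void *p)
{
	unsigned long addr = (unsigned long)p;

	set_memory_nx(addr, 1);
	set_memory_rw(addr, 1);
}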

There is a problem with this split: the large mapping remains split even
after the affected pages' protection is changed back to RW+NX, e.g. when
the module is unloaded or the bpf progs are freed. After the system has
run for a long time, more and more large mappings end up split, causing
more dTLB misses and hurting overall system performance[1].

For this reason, people have tried techniques to reduce the harm of
splitting large mappings, like bpf_prog_pack[2], which packs multiple
bpf progs into a single page instead of allocating a page and changing
its protection for each bpf prog. This makes large mapping splits happen
much less often.

This patchset addresses the problem from another angle: it merges split
mappings back into a large mapping once all entries of the split page
table have the same protection again, e.g. when a page whose protection
was changed to RO+X is changed back to RW+NX (due to module unload, bpf
prog free, etc.) and all other entries in that page table are also
RW+NX.
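
The core idea, as a greatly simplified sketch (the actual patches keep a
per-page-table count of same-protection entries rather than rescanning,
and also have to handle the PAT bit, locking and TLB flushing), looks
roughly like:

#include <linux/mm.h>

/* Return true if every PTE in the table is present with the same protection. */
static bool pte_table_uniform(pte_t *ptes, pgprot_t *prot)
{
	int i;

	*prot = pte_pgprot(ptes[0]);
	for (i = 0; i < PTRS_PER_PTE; i++) {
		if (!pte_present(ptes[i]) ||
		    pgprot_val(pte_pgprot(ptes[i])) != pgprot_val(*prot))
			return false;
	}
	return true;
}

static void try_merge_pmd(pmd_t *pmd, pte_t *ptes, unsigned long pfn)
{
	pgprot_t prot;

	if (!pte_table_uniform(ptes, &prot))
		return;

	/*
	 * All 512 PTEs agree: point the PMD at a 2M page again.  (The real
	 * code must also translate the PAT bit, which sits at a different
	 * position in large-page entries.)
	 */
	set_pmd(pmd, pfn_pmd(pfn, __pgprot(pgprot_val(prot) | _PAGE_PSE)));
	/* ...then free the now-unused PTE page and flush the TLB. */
}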

One final note: with features like bpf_prog_pack, there should already
be far fewer large mapping splits IIUC; also, this patchset cannot help
while a page whose protection has been changed is still in use. So my
take on the large mapping split problem is: to get the most value out of
keeping large mappings intact, features like bpf_prog_pack are
important, and this patchset can further reduce splits once in-use pages
with special protection are finally released.

[1]: http://lkml.kernel.org/r/CAPhsuW4eAm9QrAxhZMJu-bmvHnjWjuw86gFZzTHRaMEaeFhAxw@mail.gmail.com
[2]: https://lore.kernel.org/lkml/20220204185742.271030-1-song@kernel.org/

Aaron Lu (4):
  x86/mm/cpa: restore global bit when page is present
  x86/mm/cpa: merge splitted direct mapping when possible
  x86/mm/cpa: add merge event counter
  x86/mm/cpa: add a test interface to split direct map

 arch/x86/mm/pat/set_memory.c  | 411 +++++++++++++++++++++++++++++++++-
 include/linux/mm_types.h      |   6 +
 include/linux/page-flags.h    |   6 +
 include/linux/vm_event_item.h |   2 +
 mm/vmstat.c                   |   2 +
 5 files changed, 420 insertions(+), 7 deletions(-)

Comments

Kirill A. Shutemov Aug. 9, 2022, 10:04 a.m. UTC | #1
On Mon, Aug 08, 2022 at 10:56:45PM +0800, Aaron Lu wrote:
> This is an early RFC. While all reviews are welcome, reviewing this code
> now will be a waste of time for the x86 subsystem maintainers. I would,
> however, appreciate a preliminary review from the folks on the to and cc
> list. I'm posting it to the list in case anyone else is interested in
> seeing this early version.

Last time[1] I tried to merge pages back in the direct mapping, it led to
a substantial performance regression for some workloads. I cannot find the
report right now, but I remember it was something graphics related.

Have you done any performance evaluation?

My takeaway was that the merge has to be batched: log where changes to
the direct mapping happen, then come back to them and merge once the
number of changes crosses some limit.
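
Roughly something like this, purely as an illustration (names made up,
locking omitted):

#include <linux/mm.h>

#define MERGE_BATCH	64

static unsigned long pending[MERGE_BATCH];
static int nr_pending;

/* made-up helper: scan the logged regions and merge the ones that qualify */
static void try_merge_regions(unsigned long *addrs, int nr);

/* would be called from cpa() whenever it modifies the direct map */
static void note_direct_map_change(unsigned long addr)
{
	pending[nr_pending++] = addr & PMD_MASK;
	if (nr_pending == MERGE_BATCH) {
		try_merge_regions(pending, nr_pending);
		nr_pending = 0;
	}
}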

Also I don't see you handling set_memory_4k(). Huh?

[1] https://lore.kernel.org/lkml/20200416213229.19174-1-kirill.shutemov@linux.intel.com/
Aaron Lu Aug. 9, 2022, 2:58 p.m. UTC | #2
Hi Kirill,

Thanks a lot for the feedback.

On 8/9/2022 6:04 PM, Kirill A. Shutemov wrote:
> On Mon, Aug 08, 2022 at 10:56:45PM +0800, Aaron Lu wrote:
>> This is an early RFC. While all reviews are welcome, reviewing this code
>> now will be a waste of time for the x86 subsystem maintainers. I would,
>> however, appreciate a preliminary review from the folks on the to and cc
>> list. I'm posting it to the list in case anyone else is interested in
>> seeing this early version.
> 
> Last time[1] I tried to merge pages back in direct mapping it lead to
> substantial performance regression for some workloads. I cannot find the
> report right now, but I remember it was something graphics related.
>

Do you happen to remember the workload name? I can try running it.

> Have you done any performance evaluation?
> 

Not yet, I was mostly concentrating on correctness. In addition to the
graphics workload, do you have anything else in mind that may be
sensitive to such a change?

I think I can run patch4's mode0 test with and without this merge
functionality and see how performance changes, since mode0 essentially
does various set_memory_X() calls on different CPUs simultaneously,
which can trigger a lot of splits and merges. Sounds good?

> My take away was that the merge has to be batched. Like log where changes
> to direct mapping happens and come back to then and merge when the number
> of changes cross some limit.
> 

Appreciate your suggestion.

> Also I don't see you handling set_memory_4k(). Huh?
>

Ah right, I missed that. Currently set_memory_4k() is not handled
specially, so such mappings can be mistakenly merged. Will fix this in
later versions.

> [1] https://lore.kernel.org/lkml/20200416213229.19174-1-kirill.shutemov@linux.intel.com/
> 

Thanks!
Kirill A. Shutemov Aug. 9, 2022, 5:56 p.m. UTC | #3
On Tue, Aug 09, 2022 at 10:58:18PM +0800, Aaron Lu wrote:
> Hi Kirill,
> 
> Thanks a lot for the feedback.
> 
> On 8/9/2022 6:04 PM, Kirill A. Shutemov wrote:
> > On Mon, Aug 08, 2022 at 10:56:45PM +0800, Aaron Lu wrote:
> >> This is an early RFC. While all reviews are welcome, reviewing this code
> >> now will be a waste of time for the x86 subsystem maintainers. I would,
> >> however, appreciate a preliminary review from the folks on the to and cc
> >> list. I'm posting it to the list in case anyone else is interested in
> >> seeing this early version.
> > 
> > Last time[1] I tried to merge pages back in direct mapping it lead to
> > substantial performance regression for some workloads. I cannot find the
> > report right now, but I remember it was something graphics related.
> >
> 
> Do you happen to remember the workload name? I can try running it.


No, sorry. As I said, I tried to find the report, but failed.
Hyeonggon Yoo Aug. 11, 2022, 4:50 a.m. UTC | #4
On Mon, Aug 08, 2022 at 10:56:45PM +0800, Aaron Lu wrote:
> This is an early RFC. While all reviews are welcome, reviewing this code
> now will be a waste of time for the x86 subsystem maintainers. I would,
> however, appreciate a preliminary review from the folks on the to and cc
> list. I'm posting it to the list in case anyone else is interested in
> seeing this early version.
> 

Hello Aaron!

+Cc Mike Rapoport, who has been working on the same problem. [1]

There is also an LPC discussion (with a different approach to this
problem) [2], [4], and a performance measurement with all pages at
4K/2M. [3]

[1] https://lore.kernel.org/linux-mm/20220127085608.306306-1-rppt@kernel.org/
[2] https://www.youtube.com/watch?v=egC7ZK4pcnQ
[3] https://lpc.events/event/11/contributions/1127/attachments/922/1792/LPC21%20Direct%20map%20management%20.pdf
[4] https://lwn.net/Articles/894557/

> Dave Hansen: I need your ack before this goes to the maintainers.
> 
> Here it goes:
> 
> On x86_64, Linux has direct mapping of almost all physical memory. For
> performance reasons, this mapping is usually set as large page like 2M
> or 1G per hardware's capability with read, write and non-execute
> protection.
> 
> There are cases where some pages have to change their protection to RO
> and eXecutable, like pages that host module code or bpf prog. When these
> pages' protection are changed, the corresponding large mapping that
> cover these pages will have to be splitted into 4K first and then
> individual 4k page's protection changed accordingly, i.e. unaffected
> pages keep their original protection as RW and NX while affected pages'
> protection changed to RO and X.
> 
> There is a problem due to this split: the large mapping will remain
> splitted even after the affected pages' protection are changed back to
> RW and NX, like when the module is unloaded or bpf progs are freed.
> After system runs a long time, there can be more and more large mapping
> being splitted, causing more and more dTLB misses and overall system
> performance getting hurt[1].
> 
> For this reason, people tried some techniques to reduce the harm of
> large mapping beling splitted, like bpf_prog_pack[2] which packs
> multiple bpf progs into a single page instead of allocating and changing
> one page's protection for each bpf prog. This approach made large
> mapping split happen much fewer.
> 
> This patchset addresses this problem in another way: it merges
> splitted mappings back to a large mapping when protections of all entries
> of the splitted small mapping page table become same again, e.g. when the
> page whose protection was changed to RO+X now has its protection changed
> back to RW+NX due to reasons like module unload, bpf prog free, etc. and
> all other entries' protection are also RW+NX.
>

I tried a very similar approach a few months ago (as a toy implementation) [5],
and the biggest obstacle to this approach was: you need to be extremely sure
that page->nr_same_prot is ALWAYS correct.

For example, arch/x86/include/asm/kfence.h [6] clears and sets
_PAGE_PRESENT without going through CPA, which can simply break the count.

[5] https://github.com/hygoni/linux/tree/merge-mapping-v1r3
[6] https://elixir.bootlin.com/linux/latest/source/arch/x86/include/asm/kfence.h#L56
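
Paraphrasing [6] (abridged, not a verbatim copy), kfence does roughly:

static inline bool kfence_protect_page(unsigned long addr, bool protect)
{
	unsigned int level;
	pte_t *pte = lookup_address(addr, &level);

	if (WARN_ON(!pte || level != PG_LEVEL_4K))
		return false;

	if (protect)
		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
	else
		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));

	/* only a local TLB flush follows; cpa() never sees this change */
	flush_tlb_one_kernel(addr);
	return true;
}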

I think we may need to hook set_pte/set_pmd/etc and use proper
synchronization primitives when changing init_mm's page table to go
further on this approach.

> One final note is, with features like bpf_prog_pack etc., there can be
> much fewer large mapping split IIUC; also, this patchset can not help
> when the page which has its protection changed keeps in use. So my take
> on this large mapping split problem is: to get the most value of keeping
> large mapping intact, features like bpf_prog_pack is important. This
> patchset can help to further reduce large mapping split when in use page
> that has special protection set finally gets released.
> 
> [1]: http://lkml.kernel.org/r/CAPhsuW4eAm9QrAxhZMJu-bmvHnjWjuw86gFZzTHRaMEaeFhAxw@mail.gmail.com
> [2]: https://lore.kernel.org/lkml/20220204185742.271030-1-song@kernel.org/
> 
> Aaron Lu (4):
>   x86/mm/cpa: restore global bit when page is present
>   x86/mm/cpa: merge splitted direct mapping when possible
>   x86/mm/cpa: add merge event counter
>   x86/mm/cpa: add a test interface to split direct map
> 
>  arch/x86/mm/pat/set_memory.c  | 411 +++++++++++++++++++++++++++++++++-
>  include/linux/mm_types.h      |   6 +
>  include/linux/page-flags.h    |   6 +
>  include/linux/vm_event_item.h |   2 +
>  mm/vmstat.c                   |   2 +
>  5 files changed, 420 insertions(+), 7 deletions(-)
> 
> -- 
> 2.37.1
> 
>
Aaron Lu Aug. 11, 2022, 7:50 a.m. UTC | #5
On Thu, 2022-08-11 at 04:50 +0000, Hyeonggon Yoo wrote:
> On Mon, Aug 08, 2022 at 10:56:45PM +0800, Aaron Lu wrote:
> > This is an early RFC. While all reviews are welcome, reviewing this code
> > now will be a waste of time for the x86 subsystem maintainers. I would,
> > however, appreciate a preliminary review from the folks on the to and cc
> > list. I'm posting it to the list in case anyone else is interested in
> > seeing this early version.
> > 
> 
> Hello Aaron!
> 

Hi Hyeonggon,

> +Cc Mike Rapoport, who has been same problem. [1]
> 
> There is also LPC discussion (with different approach on this problem)
> [2], [4]
> 
> and performance measurement when all pages are 4K/2M. [3]
> 
> [1] https://lore.kernel.org/linux-mm/20220127085608.306306-1-rppt@kernel.org/
> [2] https://www.youtube.com/watch?v=egC7ZK4pcnQ
> [3] https://lpc.events/event/11/contributions/1127/attachments/922/1792/LPC21%20Direct%20map%20management%20.pdf
> [4] https://lwn.net/Articles/894557/
> 

Thanks a lot for the info.

> > Dave Hansen: I need your ack before this goes to the maintainers.
> > 
> > Here it goes:
> > 
> > On x86_64, Linux has direct mapping of almost all physical memory. For
> > performance reasons, this mapping is usually set as large page like 2M
> > or 1G per hardware's capability with read, write and non-execute
> > protection.
> > 
> > There are cases where some pages have to change their protection to RO
> > and eXecutable, like pages that host module code or bpf prog. When these
> > pages' protection are changed, the corresponding large mapping that
> > cover these pages will have to be splitted into 4K first and then
> > individual 4k page's protection changed accordingly, i.e. unaffected
> > pages keep their original protection as RW and NX while affected pages'
> > protection changed to RO and X.
> > 
> > There is a problem due to this split: the large mapping will remain
> > splitted even after the affected pages' protection are changed back to
> > RW and NX, like when the module is unloaded or bpf progs are freed.
> > After system runs a long time, there can be more and more large mapping
> > being splitted, causing more and more dTLB misses and overall system
> > performance getting hurt[1].
> > 
> > For this reason, people tried some techniques to reduce the harm of
> > large mapping beling splitted, like bpf_prog_pack[2] which packs
> > multiple bpf progs into a single page instead of allocating and changing
> > one page's protection for each bpf prog. This approach made large
> > mapping split happen much fewer.
> > 
> > This patchset addresses this problem in another way: it merges
> > splitted mappings back to a large mapping when protections of all entries
> > of the splitted small mapping page table become same again, e.g. when the
> > page whose protection was changed to RO+X now has its protection changed
> > back to RW+NX due to reasons like module unload, bpf prog free, etc. and
> > all other entries' protection are also RW+NX.
> > 
> 
> I tried very similar approach few months ago (for toy implementation) [5],

Cool, glad we have tried a similar approach :-)

> and the biggest obstacle to this approach was: you need to be extremely sure
> that the page->nr_same_prot is ALWAYS correct.
> 

Yes indeed.

> For example, in arch/x86/include/asm/kfence.h [6], it clears and set
> _PAGE_PRESENT without going through CPA, which can simply break the count.
> 
> [5] https://github.com/hygoni/linux/tree/merge-mapping-v1r3
> [6] https://elixir.bootlin.com/linux/latest/source/arch/x86/include/asm/kfence.h#L56
> 

For this specific case, it probably doesn't matter, because kfence
intentionally uses set_memory_4k() for these pages and no merge shall
ever be done for them, per commit 1dc0da6e9ec0 ("x86, kfence: enable
KFENCE for x86").
(Kirill pointed out that my current version has a problem dealing with
set_memory_4k(), but that is fixable.)

> I think we may need to hook set_pte/set_pmd/etc and use proper
> synchronization primitives when changing init_mm's page table to go
> further on this approach.

Thanks for the suggestion. I'll check how many callsites there are that
manipulate init_mm's page tables outside of cpa() and then decide if it
is possible to do the hook and sync for set_pte/etc.

> 
> > One final note is, with features like bpf_prog_pack etc., there can be
> > much fewer large mapping split IIUC; also, this patchset can not help
> > when the page which has its protection changed keeps in use. So my take
> > on this large mapping split problem is: to get the most value of keeping
> > large mapping intact, features like bpf_prog_pack is important. This
> > patchset can help to further reduce large mapping split when in use page
> > that has special protection set finally gets released.
> > 
> > [1]: http://lkml.kernel.org/r/CAPhsuW4eAm9QrAxhZMJu-bmvHnjWjuw86gFZzTHRaMEaeFhAxw@mail.gmail.com
> > [2]: https://lore.kernel.org/lkml/20220204185742.271030-1-song@kernel.org/
> > 
> > Aaron Lu (4):
> >   x86/mm/cpa: restore global bit when page is present
> >   x86/mm/cpa: merge splitted direct mapping when possible
> >   x86/mm/cpa: add merge event counter
> >   x86/mm/cpa: add a test interface to split direct map
> > 
> >  arch/x86/mm/pat/set_memory.c  | 411 +++++++++++++++++++++++++++++++++-
> >  include/linux/mm_types.h      |   6 +
> >  include/linux/page-flags.h    |   6 +
> >  include/linux/vm_event_item.h |   2 +
> >  mm/vmstat.c                   |   2 +
> >  5 files changed, 420 insertions(+), 7 deletions(-)
> > 
> > -- 
> > 2.37.1
> > 
> >
Mike Rapoport Aug. 13, 2022, 4:05 p.m. UTC | #6
Hi Aaron,

On Thu, Aug 11, 2022 at 04:50:44AM +0000, Hyeonggon Yoo wrote:
> On Mon, Aug 08, 2022 at 10:56:45PM +0800, Aaron Lu wrote:
> > This is an early RFC. While all reviews are welcome, reviewing this code
> > now will be a waste of time for the x86 subsystem maintainers. I would,
> > however, appreciate a preliminary review from the folks on the to and cc
> > list. I'm posting it to the list in case anyone else is interested in
> > seeing this early version.
> > 
> 
> Hello Aaron!
> 
> +Cc Mike Rapoport, who has been same problem. [1]

Thanks Hyeonggon!
 
> There is also LPC discussion (with different approach on this problem)
> [2], [4]
> 
> and performance measurement when all pages are 4K/2M. [3]
> 
> [1] https://lore.kernel.org/linux-mm/20220127085608.306306-1-rppt@kernel.org/
> [2] https://www.youtube.com/watch?v=egC7ZK4pcnQ
> [3] https://lpc.events/event/11/contributions/1127/attachments/922/1792/LPC21%20Direct%20map%20management%20.pdf
> [4] https://lwn.net/Articles/894557/
> 
> > Dave Hansen: I need your ack before this goes to the maintainers.
> > 
> > Here it goes:
> > 
> > On x86_64, Linux has direct mapping of almost all physical memory. For
> > performance reasons, this mapping is usually set as large page like 2M
> > or 1G per hardware's capability with read, write and non-execute
> > protection.
> > 
> > There are cases where some pages have to change their protection to RO
> > and eXecutable, like pages that host module code or bpf prog. When these
> > pages' protection are changed, the corresponding large mapping that
> > cover these pages will have to be splitted into 4K first and then
> > individual 4k page's protection changed accordingly, i.e. unaffected
> > pages keep their original protection as RW and NX while affected pages'
> > protection changed to RO and X.
> > 
> > There is a problem due to this split: the large mapping will remain
> > splitted even after the affected pages' protection are changed back to
> > RW and NX, like when the module is unloaded or bpf progs are freed.
> > After system runs a long time, there can be more and more large mapping
> > being splitted, causing more and more dTLB misses and overall system
> > performance getting hurt[1].
> > 
> > For this reason, people tried some techniques to reduce the harm of
> > large mapping beling splitted, like bpf_prog_pack[2] which packs
> > multiple bpf progs into a single page instead of allocating and changing
> > one page's protection for each bpf prog. This approach made large
> > mapping split happen much fewer.
> > 
> > This patchset addresses this problem in another way: it merges
> > splitted mappings back to a large mapping when protections of all entries
> > of the splitted small mapping page table become same again, e.g. when the
> > page whose protection was changed to RO+X now has its protection changed
> > back to RW+NX due to reasons like module unload, bpf prog free, etc. and
> > all other entries' protection are also RW+NX.
> >
> 
> I tried very similar approach few months ago (for toy implementation) [5],
> and the biggest obstacle to this approach was: you need to be extremely sure
> that the page->nr_same_prot is ALWAYS correct.
> 
> For example, in arch/x86/include/asm/kfence.h [6], it clears and set
> _PAGE_PRESENT without going through CPA, which can simply break the count.
> 
> [5] https://github.com/hygoni/linux/tree/merge-mapping-v1r3
> [6] https://elixir.bootlin.com/linux/latest/source/arch/x86/include/asm/kfence.h#L56
> 
> I think we may need to hook set_pte/set_pmd/etc and use proper
> synchronization primitives when changing init_mm's page table to go
> further on this approach.
> 
> > One final note is, with features like bpf_prog_pack etc., there can be
> > much fewer large mapping split IIUC; also, this patchset can not help
> > when the page which has its protection changed keeps in use. So my take
> > on this large mapping split problem is: to get the most value of keeping
> > large mapping intact, features like bpf_prog_pack is important. This
> > patchset can help to further reduce large mapping split when in use page
> > that has special protection set finally gets released.

I'm not sure automatic collapse of large pages in the direct map will
actually trigger frequently. 

Consider, for example, pages allocated for modules that have adjusted
protection bits. These pages could be scattered all over, and even when
they are freed, the chances that they form a contiguous 2M chunk are quite low...

I believe that to reduce the fragmentation of the direct map, the 4K pages
with changed protection should be allocated from a cache of large pages, as
I did in an older version of secretmem, or as Rick implemented in his
vmalloc and PKS series.

Then CPA may provide a method for explicitly collapsing a large page, so
that such a cache can call this method when an entire large page becomes
free.
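
Something along these lines, purely as an illustration (the names are
invented):

#include <linux/mm.h>
#include <linux/set_memory.h>

/* Hypothetical CPA entry point: re-merge the 2M mapping covering addr. */
int set_memory_collapse(unsigned long addr);

struct prot_page_cache {
	struct page	*base;		/* order-9 chunk the 4K pages are carved from */
	unsigned int	nr_freed;	/* pages handed back so far */
};

static void prot_page_cache_free(struct prot_page_cache *cache, struct page *page)
{
	unsigned long addr = (unsigned long)page_address(page);

	set_memory_nx(addr, 1);
	set_memory_rw(addr, 1);

	/* once the whole 2M chunk is RW+NX again, collapse it explicitly */
	if (++cache->nr_freed == (1u << (PMD_SHIFT - PAGE_SHIFT)))
		set_memory_collapse((unsigned long)page_address(cache->base));
}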

> > [1]: http://lkml.kernel.org/r/CAPhsuW4eAm9QrAxhZMJu-bmvHnjWjuw86gFZzTHRaMEaeFhAxw@mail.gmail.com
> > [2]: https://lore.kernel.org/lkml/20220204185742.271030-1-song@kernel.org/
> > 
> > Aaron Lu (4):
> >   x86/mm/cpa: restore global bit when page is present
> >   x86/mm/cpa: merge splitted direct mapping when possible
> >   x86/mm/cpa: add merge event counter
> >   x86/mm/cpa: add a test interface to split direct map
> > 
> >  arch/x86/mm/pat/set_memory.c  | 411 +++++++++++++++++++++++++++++++++-
> >  include/linux/mm_types.h      |   6 +
> >  include/linux/page-flags.h    |   6 +
> >  include/linux/vm_event_item.h |   2 +
> >  mm/vmstat.c                   |   2 +
> >  5 files changed, 420 insertions(+), 7 deletions(-)
> > 
> > -- 
> > 2.37.1
> > 
> >
Aaron Lu Aug. 16, 2022, 6:33 a.m. UTC | #7
Hi Mike,

Thanks for the feedback. See below for my comments.

On 8/14/2022 12:05 AM, Mike Rapoport wrote:
> Hi Aaron,
> 
> On Thu, Aug 11, 2022 at 04:50:44AM +0000, Hyeonggon Yoo wrote:
>> On Mon, Aug 08, 2022 at 10:56:45PM +0800, Aaron Lu wrote:
>>> This is an early RFC. While all reviews are welcome, reviewing this code
>>> now will be a waste of time for the x86 subsystem maintainers. I would,
>>> however, appreciate a preliminary review from the folks on the to and cc
>>> list. I'm posting it to the list in case anyone else is interested in
>>> seeing this early version.
>>>
>>
>> Hello Aaron!
>>
>> +Cc Mike Rapoport, who has been same problem. [1]
> 
> Thanks Hyeonggon!
>  
>> There is also LPC discussion (with different approach on this problem)
>> [2], [4]
>>
>> and performance measurement when all pages are 4K/2M. [3]
>>
>> [1] https://lore.kernel.org/linux-mm/20220127085608.306306-1-rppt@kernel.org/
>> [2] https://www.youtube.com/watch?v=egC7ZK4pcnQ
>> [3] https://lpc.events/event/11/contributions/1127/attachments/922/1792/LPC21%20Direct%20map%20management%20.pdf
>> [4] https://lwn.net/Articles/894557/
>>
>>> Dave Hansen: I need your ack before this goes to the maintainers.
>>>
>>> Here it goes:
>>>
>>> On x86_64, Linux has direct mapping of almost all physical memory. For
>>> performance reasons, this mapping is usually set as large page like 2M
>>> or 1G per hardware's capability with read, write and non-execute
>>> protection.
>>>
>>> There are cases where some pages have to change their protection to RO
>>> and eXecutable, like pages that host module code or bpf prog. When these
>>> pages' protection are changed, the corresponding large mapping that
>>> cover these pages will have to be splitted into 4K first and then
>>> individual 4k page's protection changed accordingly, i.e. unaffected
>>> pages keep their original protection as RW and NX while affected pages'
>>> protection changed to RO and X.
>>>
>>> There is a problem due to this split: the large mapping will remain
>>> splitted even after the affected pages' protection are changed back to
>>> RW and NX, like when the module is unloaded or bpf progs are freed.
>>> After system runs a long time, there can be more and more large mapping
>>> being splitted, causing more and more dTLB misses and overall system
>>> performance getting hurt[1].
>>>
>>> For this reason, people tried some techniques to reduce the harm of
>>> large mapping beling splitted, like bpf_prog_pack[2] which packs
>>> multiple bpf progs into a single page instead of allocating and changing
>>> one page's protection for each bpf prog. This approach made large
>>> mapping split happen much fewer.
>>>
>>> This patchset addresses this problem in another way: it merges
>>> splitted mappings back to a large mapping when protections of all entries
>>> of the splitted small mapping page table become same again, e.g. when the
>>> page whose protection was changed to RO+X now has its protection changed
>>> back to RW+NX due to reasons like module unload, bpf prog free, etc. and
>>> all other entries' protection are also RW+NX.
>>>
>>
>> I tried very similar approach few months ago (for toy implementation) [5],
>> and the biggest obstacle to this approach was: you need to be extremely sure
>> that the page->nr_same_prot is ALWAYS correct.
>>
>> For example, in arch/x86/include/asm/kfence.h [6], it clears and set
>> _PAGE_PRESENT without going through CPA, which can simply break the count.
>>
>> [5] https://github.com/hygoni/linux/tree/merge-mapping-v1r3
>> [6] https://elixir.bootlin.com/linux/latest/source/arch/x86/include/asm/kfence.h#L56
>>
>> I think we may need to hook set_pte/set_pmd/etc and use proper
>> synchronization primitives when changing init_mm's page table to go
>> further on this approach.
>>
>>> One final note is, with features like bpf_prog_pack etc., there can be
>>> much fewer large mapping split IIUC; also, this patchset can not help
>>> when the page which has its protection changed keeps in use. So my take
>>> on this large mapping split problem is: to get the most value of keeping
>>> large mapping intact, features like bpf_prog_pack is important. This
>>> patchset can help to further reduce large mapping split when in use page
>>> that has special protection set finally gets released.
> 
> I'm not sure automatic collapse of large pages in the direct map will
> actually trigger frequently. 
> 
> Consider for example pages allocated for modules, that have adjusted
> protection bits. This pages could be scattered all over and even when they
> are freed, chances there will be a contiguous 2M chunk are quite low...
> 

When the pages with special protection bits that are scattered across a
2M chunk are all freed, we can do the merge for them. I suppose you mean
it's not easy to have all these special pages freed?

> I believe that to reduce the fragmentation of the direct map the 4K pages
> with changed protection should be allocated from a cache of large pages, as
> I did on older version of secretmem or as Rick implemented in his vmalloc
> and PKS series.

I agree that the allocation side is important for reducing direct map
fragmentation. The approach here doesn't help while these special pages
are still in use, whereas the approaches you mentioned can help with that.

> 
> Then CPA may provide a method for explicitly collapsing a large page, so
> that such cache can call this method when an entire large page becomes
> free.

I think this is a good idea. With things like your Unmap migratetype
patchset, when an order-9 page becomes free, it can somehow notify CPA,
and then arch code like CPA can manipulate the direct mapping as it sees
appropriate, like merging lower-level page tables into a higher level.

This also saves the trouble of tracking pgt->same_prot and nr_same_prot
of the kernel page tables in this patchset: CPA then only needs to get
notified and do a page table scan to make sure such a merge is correct.

I suppose this should work as long as all pages that will have their
protection bits changed are allocated from the page allocator (so that
your approach can track such pages).

> 
>>> [1]: http://lkml.kernel.org/r/CAPhsuW4eAm9QrAxhZMJu-bmvHnjWjuw86gFZzTHRaMEaeFhAxw@mail.gmail.com
>>> [2]: https://lore.kernel.org/lkml/20220204185742.271030-1-song@kernel.org/
>>>
>>> Aaron Lu (4):
>>>   x86/mm/cpa: restore global bit when page is present
>>>   x86/mm/cpa: merge splitted direct mapping when possible
>>>   x86/mm/cpa: add merge event counter
>>>   x86/mm/cpa: add a test interface to split direct map
>>>
>>>  arch/x86/mm/pat/set_memory.c  | 411 +++++++++++++++++++++++++++++++++-
>>>  include/linux/mm_types.h      |   6 +
>>>  include/linux/page-flags.h    |   6 +
>>>  include/linux/vm_event_item.h |   2 +
>>>  mm/vmstat.c                   |   2 +
>>>  5 files changed, 420 insertions(+), 7 deletions(-)
>>>
>>> -- 
>>> 2.37.1
>>>
>>>
>