[v2,0/9] Free user PTE page table pages

Message ID 20210819031858.98043-1-zhengqi.arch@bytedance.com (mailing list archive)

Message

Qi Zheng Aug. 19, 2021, 3:18 a.m. UTC
Hi,

This patch series aims to free user PTE page table pages when all PTE entries
are empty.

The story begins with the observation that some malloc libraries (e.g. jemalloc
or tcmalloc) usually allocate large amounts of virtual address space (VAs) with
mmap() and never unmap those VAs. When they want to release physical memory,
they use madvise(MADV_DONTNEED). But madvise() does not free the page tables,
so a process that touches an enormous virtual address space can accumulate a
large number of page tables.

The following figures are a memory usage snapshot of one process actually
observed on our server:

	VIRT:  55t
	RES:   590g
	VmPTE: 110g

As we can see, the PTE page tables occupy 110g, while the RES is 590g. In
theory, the process needs only about 1.2g of PTE page tables to map that
physical memory (each 4 KiB PTE page maps 2 MiB, so 590g / 2 MiB * 4 KiB is
~1.2g). The reason the PTE page tables occupy so much memory is that
madvise(MADV_DONTNEED) only clears the PTEs and frees the physical memory,
but does not free the PTE page table pages themselves. So we can free those
empty PTE page tables to save memory. In the above case, we can save about
108g of memory (best case). And the larger the difference between VIRT and
RES, the more memory we save.

In this patch series, we add a pte_refcount field to the struct page of a PTE
page table to track how many users the table has. Similar to the page refcount
mechanism, a user of a PTE page table must hold a refcount on it before
accessing it. The PTE page table page is freed when the last refcount is
dropped.
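
(A rough sketch of the intended usage pattern; the helper names follow this
series, but the exact signatures here are illustrative assumptions, not the
final API:)

	/*
	 * Illustrative only: take a reference on the PTE page table
	 * before walking it, and drop it afterwards; the table can be
	 * freed once the last reference is dropped.
	 */
	if (pte_try_get(mm, pmd)) {
		pte_t *pte = pte_offset_map_lock(mm, pmd, addr, &ptl);

		/* ... access or modify PTE entries ... */

		pte_unmap_unlock(pte, ptl);
		pte_put(mm, pmd, addr);
	}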

Testing:

The following code snippet shows the effect of the optimization:

	mmap 50G
	while (1) {
		for (; i < 1024 * 25; i++) {
			touch 2M memory
			madvise MADV_DONTNEED 2M
		}
	}
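
(For reference, a runnable C approximation of the snippet above might look
like the following; the 50G size and 2M step come from the snippet, everything
else is an assumption of this sketch:)

	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t size = 50UL << 30, step = 2UL << 20;
		char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		while (1) {
			for (size_t i = 0; i < size; i += step) {
				memset(buf + i, 1, step);	/* touch 2M */
				madvise(buf + i, step, MADV_DONTNEED);
			}
		}
	}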

As we can see, the memory usage of VmPTE is reduced:

			before		                after
VIRT		       50.0 GB			      50.0 GB
RES		        3.1 MB			       3.6 MB
VmPTE		     102640 kB			       248 kB

I have also tested stability with LTP[1] for several weeks and have not seen
any crash so far.

Page fault performance can be affected by the allocation/freeing of PTE page
table pages. The following are the results of a micro benchmark[2]:

root@~# perf stat -e page-faults --repeat 5 ./multi-fault $threads:

threads         before (pf/min)                     after (pf/min)
    1                32,085,255                         31,880,833 (-0.64%)
    8               101,674,967                        100,588,311 (-1.17%)
   16               113,207,000                        112,801,832 (-0.36%)

(The "pfn/min" means how many page faults in one minute.)

Page fault performance is ~1% slower than before.

This series is based on next-20210812.

Comments and suggestions are welcome.

Thanks,
Qi.

[1] https://github.com/linux-test-project/ltp
[2] https://lore.kernel.org/patchwork/comment/296794/

Changelog in v1 -> v2:
 - Change pte_install() to pmd_install().
 - Fix some typos and code style problems.
 - Split [PATCH v1 5/7] into [PATCH v2 4/9], [PATCH v2 5/9], [PATCH v2 6/9]
   and [PATCH v2 7/9].

Qi Zheng (9):
  mm: introduce pmd_install() helper
  mm: remove redundant smp_wmb()
  mm: rework the parameter of lock_page_or_retry()
  mm: move pte_alloc{,_map,_map_lock}() to a separate file
  mm: pte_refcount infrastructure
  mm: free user PTE page table pages
  mm: add THP support for pte_ref
  mm: free PTE page table by using rcu mechanism
  mm: use mmu_gather to free PTE page table

 arch/arm/mm/pgd.c             |   1 +
 arch/arm64/mm/hugetlbpage.c   |   1 +
 arch/ia64/mm/hugetlbpage.c    |   1 +
 arch/parisc/mm/hugetlbpage.c  |   1 +
 arch/powerpc/mm/hugetlbpage.c |   1 +
 arch/s390/mm/gmap.c           |   1 +
 arch/s390/mm/pgtable.c        |   1 +
 arch/sh/mm/hugetlbpage.c      |   1 +
 arch/sparc/mm/hugetlbpage.c   |   1 +
 arch/x86/Kconfig              |   2 +-
 fs/proc/task_mmu.c            |  22 +++-
 fs/userfaultfd.c              |   2 +
 include/linux/mm.h            |  12 +-
 include/linux/mm_types.h      |   8 +-
 include/linux/pagemap.h       |   8 +-
 include/linux/pgtable.h       |   3 +-
 include/linux/pte_ref.h       | 300 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/rmap.h          |   4 +-
 kernel/events/uprobes.c       |   3 +
 mm/Kconfig                    |   4 +
 mm/Makefile                   |   3 +-
 mm/filemap.c                  |  57 ++++----
 mm/gup.c                      |   7 +
 mm/hmm.c                      |   4 +
 mm/internal.h                 |   2 +
 mm/khugepaged.c               |   9 ++
 mm/ksm.c                      |   4 +
 mm/madvise.c                  |  18 ++-
 mm/memcontrol.c               |  11 +-
 mm/memory.c                   | 276 ++++++++++++++++++++++++--------------
 mm/mempolicy.c                |   4 +-
 mm/migrate.c                  |  18 +--
 mm/mincore.c                  |   5 +-
 mm/mlock.c                    |   1 +
 mm/mmu_gather.c               |  40 +++---
 mm/mprotect.c                 |  10 +-
 mm/mremap.c                   |  12 +-
 mm/page_vma_mapped.c          |   4 +
 mm/pagewalk.c                 |  16 ++-
 mm/pgtable-generic.c          |   2 +
 mm/pte_ref.c                  | 143 ++++++++++++++++++++
 mm/rmap.c                     |  13 ++
 mm/sparse-vmemmap.c           |   2 +-
 mm/swapfile.c                 |   3 +-
 mm/userfaultfd.c              |  18 ++-
 45 files changed, 849 insertions(+), 210 deletions(-)
 create mode 100644 include/linux/pte_ref.h
 create mode 100644 mm/pte_ref.c

Comments

David Hildenbrand Sept. 1, 2021, 12:32 p.m. UTC | #1
On 19.08.21 05:18, Qi Zheng wrote:
> Hi,
> 
> This patch series aims to free user PTE page table pages when all PTE entries
> are empty.
> 
> The beginning of this story is that some malloc libraries(e.g. jemalloc or
> tcmalloc) usually allocate the amount of VAs by mmap() and do not unmap those VAs.
> They will use madvise(MADV_DONTNEED) to free physical memory if they want.
> But the page tables do not be freed by madvise(), so it can produce many
> page tables when the process touches an enormous virtual address space.
> 
> The following figures are a memory usage snapshot of one process which actually
> happened on our server:
> 
> 	VIRT:  55t
> 	RES:   590g
> 	VmPTE: 110g
> 
> As we can see, the PTE page tables size is 110g, while the RES is 590g. In
> theory, the process only need 1.2g PTE page tables to map those physical
> memory. The reason why PTE page tables occupy a lot of memory is that
> madvise(MADV_DONTNEED) only empty the PTE and free physical memory but
> doesn't free the PTE page table pages. So we can free those empty PTE page
> tables to save memory. In the above cases, we can save memory about 108g(best
> case). And the larger the difference between the size of VIRT and RES, the
> more memory we save.
> 
> In this patch series, we add a pte_refcount field to the struct page of page
> table to track how many users of PTE page table. Similar to the mechanism of
> page refcount, the user of PTE page table should hold a refcount to it before
> accessing. The PTE page table page will be freed when the last refcount is
> dropped.
> 
> Testing:
> 
> The following code snippet can show the effect of optimization:
> 
> 	mmap 50G
> 	while (1) {
> 		for (; i < 1024 * 25; i++) {
> 			touch 2M memory
> 			madvise MADV_DONTNEED 2M
> 		}
> 	}
> 
> As we can see, the memory usage of VmPTE is reduced:
> 
> 			before		                after
> VIRT		       50.0 GB			      50.0 GB
> RES		        3.1 MB			       3.6 MB
> VmPTE		     102640 kB			       248 kB
> 
> I also have tested the stability by LTP[1] for several weeks. I have not seen
> any crash so far.
> 
> The performance of page fault can be affected because of the allocation/freeing
> of PTE page table pages. The following is the test result by using a micro
> benchmark[2]:
> 
> root@~# perf stat -e page-faults --repeat 5 ./multi-fault $threads:
> 
> threads         before (pf/min)                     after (pf/min)
>      1                32,085,255                         31,880,833 (-0.64%)
>      8               101,674,967                        100,588,311 (-1.17%)
>     16               113,207,000                        112,801,832 (-0.36%)
> 
> (The "pfn/min" means how many page faults in one minute.)
> 
> The performance of page fault is ~1% slower than before.
> 
> This series is based on next-20210812.
> 
> Comments and suggestions are welcome.
> 
> Thanks,
> Qi.
> 


Some high-level feedback after studying the code:

1. Try introducing the new dummy primitives ("API") first, and then 
convert each subsystem individually; especially, maybe convert the whole 
pagefault handling in a single patch, because it's far from trivial. 
This will make this series much easier to digest.

Then, have a patch that adds actual logic to the dummy primitives via a 
config option.

2. Minimize the API.

a) pte_alloc_get{,_map,_map_lock}() is really ugly. Maybe restrict it to 
pte_alloc_get().

b) pmd_trans_unstable_or_pte_try_get() and friends are really ugly.

Handle it independently for now, even if it implies duplicate runtime 
checks.

if (pmd_trans_unstable() || !pte_try_get()) ...

We can always optimize later, once we can come up with something cleaner.

3. Merge #6 and #7, after factoring out all changes to other subsystems 
to use the API.

4. Merge #8 into #6. There is a lot of unnecessary code churn back and 
forth, and IMHO the whole approach might not make sense without RCU due 
to the additional locking overhead.

Or at least, try to not modify the API you introduced in patch #6 or #7 
in #8 again. Converting all call sites back and forth just makes review 
quite hard.


I am preparing some cleanups that will make get_locked_pte() and 
similar a little nicer to handle. I'll send them out this or next week.
Jason Gunthorpe Sept. 1, 2021, 4:07 p.m. UTC | #2
On Wed, Sep 01, 2021 at 02:32:08PM +0200, David Hildenbrand wrote:
 
> b) pmd_trans_unstable_or_pte_try_get() and friends are really ugly.

I suspect the good API here is really more like:

  ptep = pte_try_map(pmdp, &pmd_value)
  if (!ptep) {
     // pmd_value is guaranteed to not be a PTE table pointer.
     if (pmd_XXX(pmd_value))
  }

I.e. the core code will do whatever is needed, including the THP data race
avoidance, to either return the next level page table or the value of
a pmd that is not a next level page table. Callers are much clearer in
this way.

E.g. this is a fairly representative sample user:

static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
			   struct mm_walk *walk)
{
	if (pmd_trans_unstable(pmd))
		goto out;
	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);

And it is obviously pretty easy to integrate any refcount into
pte_try_map and pte_unmap as in my other email.
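
(For illustration, a sketch of how the walker above might look with such a
pte_try_map(); the exact signature is an assumption of this sketch, not
something defined anywhere yet:)

	pmd_t pmd_value;
	pte_t *pte;

	pte = pte_try_map(pmd, &pmd_value);
	if (!pte) {
		/* pmd_value is guaranteed not to point to a PTE table */
		if (pmd_trans_huge(pmd_value))
			/* handle the THP case here */;
		goto out;
	}
	/* ... walk the PTEs ... */
	pte_unmap(pte);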

Jason
David Hildenbrand Sept. 1, 2021, 4:10 p.m. UTC | #3
On 01.09.21 18:07, Jason Gunthorpe wrote:
> On Wed, Sep 01, 2021 at 02:32:08PM +0200, David Hildenbrand wrote:
>   
>> b) pmd_trans_unstable_or_pte_try_get() and friends are really ugly.
> 
> I suspect the good API here is really more like:

That was exactly my first idea and I tried to rework the code for 
roughly 2 days and failed.

Especially in pagefault logic, we temporarily unmap/unlock to map/lock 
again later and don't want the page table to just vanish.

I think I met similar cases when allocating a page table and not wanting 
it to vanish and not wanting to map/lock it. But I don't recall all the 
corner cases: it didn't work for me.

> 
>    ptep = pte_try_map(pmdp, &pmd_value)
>    if (!ptep) {
>       // pmd_value is guarenteed to not be a PTE table pointer.
>       if (pmd_XXX(pmd_value))
>    }
> 
> Ie the core code will do whatever stuff, including the THP data race
> avoidance, to either return the next level page table or the value of
> a pmd that is not a enxt level page table. Callers are much clearer in
> this way.
> 
> Eg this is a fairly representative sample user:
> 
> static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> 			   struct mm_walk *walk)
> {
> 	if (pmd_trans_unstable(pmd))
> 		goto out;
> 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> 
> And it is obviously pretty easy to integrate any refcount into
> pte_try_map and pte_unmap as in my other email.

It didn't work when I tried.
Qi Zheng Sept. 2, 2021, 3:37 a.m. UTC | #4
On 2021/9/1 PM8:32, David Hildenbrand wrote:
> On 19.08.21 05:18, Qi Zheng wrote:
>> Hi,
>>
>> This patch series aims to free user PTE page table pages when all PTE 
>> entries
>> are empty.
>>
>> The beginning of this story is that some malloc libraries(e.g. 
>> jemalloc or
>> tcmalloc) usually allocate the amount of VAs by mmap() and do not 
>> unmap those VAs.
>> They will use madvise(MADV_DONTNEED) to free physical memory if they 
>> want.
>> But the page tables do not be freed by madvise(), so it can produce many
>> page tables when the process touches an enormous virtual address space.
>>
>> The following figures are a memory usage snapshot of one process which 
>> actually
>> happened on our server:
>>
>>     VIRT:  55t
>>     RES:   590g
>>     VmPTE: 110g
>>
>> As we can see, the PTE page tables size is 110g, while the RES is 
>> 590g. In
>> theory, the process only need 1.2g PTE page tables to map those physical
>> memory. The reason why PTE page tables occupy a lot of memory is that
>> madvise(MADV_DONTNEED) only empty the PTE and free physical memory but
>> doesn't free the PTE page table pages. So we can free those empty PTE 
>> page
>> tables to save memory. In the above cases, we can save memory about 
>> 108g(best
>> case). And the larger the difference between the size of VIRT and RES, 
>> the
>> more memory we save.
>>
>> In this patch series, we add a pte_refcount field to the struct page 
>> of page
>> table to track how many users of PTE page table. Similar to the 
>> mechanism of
>> page refcount, the user of PTE page table should hold a refcount to it 
>> before
>> accessing. The PTE page table page will be freed when the last 
>> refcount is
>> dropped.
>>
>> Testing:
>>
>> The following code snippet can show the effect of optimization:
>>
>>     mmap 50G
>>     while (1) {
>>         for (; i < 1024 * 25; i++) {
>>             touch 2M memory
>>             madvise MADV_DONTNEED 2M
>>         }
>>     }
>>
>> As we can see, the memory usage of VmPTE is reduced:
>>
>>             before                        after
>> VIRT               50.0 GB                  50.0 GB
>> RES                3.1 MB                   3.6 MB
>> VmPTE             102640 kB                   248 kB
>>
>> I also have tested the stability by LTP[1] for several weeks. I have 
>> not seen
>> any crash so far.
>>
>> The performance of page fault can be affected because of the 
>> allocation/freeing
>> of PTE page table pages. The following is the test result by using a 
>> micro
>> benchmark[2]:
>>
>> root@~# perf stat -e page-faults --repeat 5 ./multi-fault $threads:
>>
>> threads         before (pf/min)                     after (pf/min)
>>      1                32,085,255                         31,880,833 
>> (-0.64%)
>>      8               101,674,967                        100,588,311 
>> (-1.17%)
>>     16               113,207,000                        112,801,832 
>> (-0.36%)
>>
>> (The "pfn/min" means how many page faults in one minute.)
>>
>> The performance of page fault is ~1% slower than before.
>>
>> This series is based on next-20210812.
>>
>> Comments and suggestions are welcome.
>>
>> Thanks,
>> Qi.
>>
> 
> 
> Some high-level feedback after studying the code:
> 
> 1. Try introducing the new dummy primitives ("API") first, and then 
> convert each subsystem individually; especially, maybe convert the whole 
> pagefault handling in a single patch, because it's far from trivial. 
> This will make this series much easier to digest.
> 
> Then, have a patch that adds actual logic to the dummy primitives via a 
> config option.
> 
> 2. Minimize the API.
> 
> a) pte_alloc_get{,_map,_map_lock}() is really ugly. Maybe restrict it to 
> pte_alloc_get().
> 
> b) pmd_trans_unstable_or_pte_try_get() and friends are really ugly.
> 
> Handle it independently for now, even if it implies duplicate runtime 
> checks.
> 
> if (pmd_trans_unstable() || !pte_try_get()) ...
> 
> We can always optimize later, once we can come up with something cleaner.
> 
> 3. Merge #6, and #7, after factoring out all changes to other subsystems 
> to use the API
> 
> 4. Merge #8 into #6. There is a lot of unnecessary code churn back and 
> forth, and IMHO the whole approach might not make sense without RCU due 
> to the additional locking overhead.
> 
> Or at least, try to not modify the API you introduced in patch #6 or #7 
> in #8 again. Converting all call sites back and forth just makes review 
> quite hard.
> 

Very detailed feedback! Thank you very much for your time and energy.
I will carefully adopt and implement these suggestions.

> 
> I am preparing some some cleanups that will make get_locked_pte() and 
> similar a little nicer to handle. I'll send them out this or next week.
> 

Yup, we simply convert the pte_alloc_map_lock() in
__get_locked_pte() to pte_alloc_get_map_lock(), and then
call the paired pte_put() in the caller of get_locked_pte().
Like the following pattern:

	insert_page
	--> get_locked_pte
	    --> __get_locked_pte
		--> pte_alloc_get_map_lock

	    "do some things"

	    pte_put

This is really ugly and hard to review.

I look forward to your cleanups. Besides, could you outline your approach?

Thanks again,
Qi
Qi Zheng Sept. 15, 2021, 2:52 p.m. UTC | #5
On 9/1/21 8:32 PM, David Hildenbrand wrote:

Hi David,

>>
> 
> 
> Some high-level feedback after studying the code:
> 
> 1. Try introducing the new dummy primitives ("API") first, and then 
> convert each subsystem individually; especially, maybe convert the whole 
> pagefault handling in a single patch, because it's far from trivial. 
> This will make this series much easier to digest.

I am going to split this patch series as follows:

1. Introduce the new dummy APIs as empty implementations, but explain
    their semantics.
2. Merge #6, #7 and #8, and call these dummy APIs in every necessary
    location, splitting some special cases into individual patches, such
    as pagefault and gup, so that we can explain the concurrency in these
    cases in more detail. For example, we don't need to hold any
    pte_refcount in the gup fast path on the x86_64 platform, because the
    PTE page can't be freed while local interrupts are disabled in the
    gup fast path (see the sketch after this list).
3. Introduce CONFIG_FREE_USER_PTE and implement these empty dummy APIs.
4. Add a description document.
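
(A minimal sketch of the gup-fast reasoning mentioned in step 2; it only shows
the pattern the gup fast path already uses, with pte_offset_map()/pte_unmap()
being the existing helpers, and is illustrative rather than code from this
series:)

	unsigned long flags;
	pte_t *ptep;

	/*
	 * With local interrupts disabled, a PTE page table cannot be
	 * freed underneath us on x86_64 (freeing waits for an IPI or an
	 * RCU grace period), so no pte_refcount needs to be taken here.
	 */
	local_irq_save(flags);
	ptep = pte_offset_map(pmdp, addr);
	/* ... read PTE entries ... */
	pte_unmap(ptep);
	local_irq_restore(flags);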

And I will try to add a function that combines pte_offset_map() and
pte_try_get(). Maybe the function name should be pte_try_map() as
recommended by Jason, or should pte_offset_map() be kept unchanged?

> 
> Then, have a patch that adds actual logic to the dummy primitives via a 
> config option.
> 
> 2. Minimize the API.
> 
> a) pte_alloc_get{,_map,_map_lock}() is really ugly. Maybe restrict it to 
> pte_alloc_get().

I also think pte_alloc_get{,_map,_map_lock}() is ugly, but I can't
figure out a more suitable name. Maybe we can keep
pte_alloc{,_map,_map_lock}() unchanged? But I am worried that the
caller will forget to call the paired pte_put().

> 
> b) pmd_trans_unstable_or_pte_try_get() and friends are really ugly.
> 
> Handle it independently for now, even if it implies duplicate runtime 
> checks.
> 
> if (pmd_trans_unstable() || !pte_try_get()) ...
> 
> We can always optimize later, once we can come up with something cleaner.
> 
> 3. Merge #6, and #7, after factoring out all changes to other subsystems 
> to use the API
> 
> 4. Merge #8 into #6. There is a lot of unnecessary code churn back and 
> forth, and IMHO the whole approach might not make sense without RCU due 
> to the additional locking overhead.
> 
> Or at least, try to not modify the API you introduced in patch #6 or #7 
> in #8 again. Converting all call sites back and forth just makes review 
> quite hard.
> 
> 
> I am preparing some some cleanups that will make get_locked_pte() and 
> similar a little nicer to handle. I'll send them out this or next week.
> 
Thanks,
Qi
Jason Gunthorpe Sept. 15, 2021, 2:59 p.m. UTC | #6
On Wed, Sep 15, 2021 at 10:52:40PM +0800, Qi Zheng wrote:
> I am going to split this patch series as follows:
> 
> 1. Introduce the new dummy APIs, which is an empty implementation.
>    But I will explain its semantics.
> 2. Merge #6, #7 and #8, and call these dummy APIs in any necessary
>    location, and split some special cases into single patches, such as
>    pagefault and gup, etc. So that we can explain in more detail the
>    concurrency in these cases. For example, we don't need to hold any
>    pte_refcount in the fast path in gup on the x86_64 platform. Because
>    the PTE page can't be freed after the local CPU interrupt is closed
>    in the fast path in gup.
> 3. Introduce CONFIG_FREE_USER_PTE and implement these empty dummy APIs.
> 4. Add a description document.
> 
> And I try to add a function that combines pte_offset_map() and
> pte_try_get(). Maybe the func name is pte_try_map() recommended by
> Jason, or keep the pte_offset_map() unchanged?

It is part of the transformation: add a
pte_try_map()/pte_undo_try_map() and replace all the pte_offset_map()
callsites that can use the new API with it. The idea was that try_map
would incorporate the pmd_trans_unstable()/etc mess, so searching for
trans_unstable is a good place to start finding candidates. Some are
simple, some are tricky.

When you get to step 3 you just change pte_try_map() and the callsites
don't need changing.
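
(A minimal sketch of what such a wrapper could look like once the refcount
exists; the helper names follow this discussion, but the exact signature is
an assumption:)

	static inline pte_t *pte_try_map(struct mm_struct *mm, pmd_t *pmd,
					 unsigned long addr)
	{
		/*
		 * Hide the pmd_trans_unstable()/THP mess and the refcount
		 * in one place; callers just see "a PTE table or NULL".
		 */
		if (pmd_trans_unstable(pmd) || !pte_try_get(mm, pmd))
			return NULL;
		return pte_offset_map(pmd, addr);
	}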

Jason
Qi Zheng Sept. 16, 2021, 5:32 a.m. UTC | #7
On 9/15/21 10:59 PM, Jason Gunthorpe wrote:
> On Wed, Sep 15, 2021 at 10:52:40PM +0800, Qi Zheng wrote:
>> I am going to split this patch series as follows:
>>
>> 1. Introduce the new dummy APIs, which is an empty implementation.
>>     But I will explain its semantics.
>> 2. Merge #6, #7 and #8, and call these dummy APIs in any necessary
>>     location, and split some special cases into single patches, such as
>>     pagefault and gup, etc. So that we can explain in more detail the
>>     concurrency in these cases. For example, we don't need to hold any
>>     pte_refcount in the fast path in gup on the x86_64 platform. Because
>>     the PTE page can't be freed after the local CPU interrupt is closed
>>     in the fast path in gup.
>> 3. Introduce CONFIG_FREE_USER_PTE and implement these empty dummy APIs.
>> 4. Add a description document.
>>
>> And I try to add a function that combines pte_offset_map() and
>> pte_try_get(). Maybe the func name is pte_try_map() recommended by
>> Jason, or keep the pte_offset_map() unchanged?
> 
> It is part of the transformation, add a
> pte_try_map()/pte_undo_try_map() and replace all the pte_offset_map()
> callsites that can use the new API with it. The idea was that try_map
> would incorporate the pmd_trans_unstable/etc mess so searching for
> trans_unstable is a good place to start finding candidates. Some are
> simple, some are tricky.

Yes, I will search pte_offset_map()/pmd_trans_unstable/etc, and then 
analyze the specific situation.

> 
> When you get to step 3 you just change pte_try_map() and the callsites
> don't need changing.
> 
> Jason
> 

Thanks,
Qi
David Hildenbrand Sept. 16, 2021, 8:30 a.m. UTC | #8
On 16.09.21 07:32, Qi Zheng wrote:
> 
> 
> On 9/15/21 10:59 PM, Jason Gunthorpe wrote:
>> On Wed, Sep 15, 2021 at 10:52:40PM +0800, Qi Zheng wrote:
>>> I am going to split this patch series as follows:
>>>
>>> 1. Introduce the new dummy APIs, which is an empty implementation.
>>>      But I will explain its semantics.
>>> 2. Merge #6, #7 and #8, and call these dummy APIs in any necessary
>>>      location, and split some special cases into single patches, such as
>>>      pagefault and gup, etc. So that we can explain in more detail the
>>>      concurrency in these cases. For example, we don't need to hold any
>>>      pte_refcount in the fast path in gup on the x86_64 platform. Because
>>>      the PTE page can't be freed after the local CPU interrupt is closed
>>>      in the fast path in gup.
>>> 3. Introduce CONFIG_FREE_USER_PTE and implement these empty dummy APIs.
>>> 4. Add a description document.
>>>
>>> And I try to add a function that combines pte_offset_map() and
>>> pte_try_get(). Maybe the func name is pte_try_map() recommended by
>>> Jason, or keep the pte_offset_map() unchanged?
>>
>> It is part of the transformation, add a
>> pte_try_map()/pte_undo_try_map() and replace all the pte_offset_map()
>> callsites that can use the new API with it. The idea was that try_map
>> would incorporate the pmd_trans_unstable/etc mess so searching for
>> trans_unstable is a good place to start finding candidates. Some are
>> simple, some are tricky.
> 
> Yes, I will search pte_offset_map()/pmd_trans_unstable/etc, and then
> analyze the specific situation.

Maybe propose the new API first, before doing the actual implementation. 
Might save you from doing some additional back-and-forth work eventually.
Qi Zheng Sept. 16, 2021, 8:41 a.m. UTC | #9
On 9/16/21 4:30 PM, David Hildenbrand wrote:
> On 16.09.21 07:32, Qi Zheng wrote:
>>
>>
>> On 9/15/21 10:59 PM, Jason Gunthorpe wrote:
>>> On Wed, Sep 15, 2021 at 10:52:40PM +0800, Qi Zheng wrote:
>>>> I am going to split this patch series as follows:
>>>>
>>>> 1. Introduce the new dummy APIs, which is an empty implementation.
>>>>      But I will explain its semantics.
>>>> 2. Merge #6, #7 and #8, and call these dummy APIs in any necessary
>>>>      location, and split some special cases into single patches, 
>>>> such as
>>>>      pagefault and gup, etc. So that we can explain in more detail the
>>>>      concurrency in these cases. For example, we don't need to hold any
>>>>      pte_refcount in the fast path in gup on the x86_64 platform. 
>>>> Because
>>>>      the PTE page can't be freed after the local CPU interrupt is 
>>>> closed
>>>>      in the fast path in gup.
>>>> 3. Introduce CONFIG_FREE_USER_PTE and implement these empty dummy APIs.
>>>> 4. Add a description document.
>>>>
>>>> And I try to add a function that combines pte_offset_map() and
>>>> pte_try_get(). Maybe the func name is pte_try_map() recommended by
>>>> Jason, or keep the pte_offset_map() unchanged?
>>>
>>> It is part of the transformation, add a
>>> pte_try_map()/pte_undo_try_map() and replace all the pte_offset_map()
>>> callsites that can use the new API with it. The idea was that try_map
>>> would incorporate the pmd_trans_unstable/etc mess so searching for
>>> trans_unstable is a good place to start finding candidates. Some are
>>> simple, some are tricky.
>>
>> Yes, I will search pte_offset_map()/pmd_trans_unstable/etc, and then
>> analyze the specific situation.
> 
> Maybe propose the new API first, before doing the actual implementation. 
> Might safe you from doing some additional back-and-forth
> work eventually.
> 

OK, I will propose all the dummy APIs in step 1.

Thanks,
Qi
