Message ID | 88c445ae-552-5243-31a4-2674bac62d4d@google.com (mailing list archive) |
---|---|
State | New |
Series | mm: free retracted page table by RCU |
On Mon, May 29, 2023 at 8:15 AM Hugh Dickins <hughd@google.com> wrote:
> Before putting them to use (several commits later), add rcu_read_lock()
> to pte_offset_map(), and rcu_read_unlock() to pte_unmap(). Make this a
> separate commit, since it risks exposing imbalances: prior commits have
> fixed all the known imbalances, but we may find some have been missed.
[...]
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index c7ab18a5fb77..674671835631 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -236,7 +236,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
>  {
>  	pmd_t pmdval;
>
> -	/* rcu_read_lock() to be added later */
> +	rcu_read_lock();
>  	pmdval = pmdp_get_lockless(pmd);
>  	if (pmdvalp)
>  		*pmdvalp = pmdval;

It might be a good idea to document that this series assumes that the
first argument to __pte_offset_map() is a pointer into a second-level
page table (and not a local copy of the entry) unless the containing
VMA is known to not be THP-eligible or the page table is detached from
the page table hierarchy or something like that. Currently a bunch of
places pass references to local copies of the entry, and while I think
all of these are fine, it would probably be good to at least document
why these are allowed to do it while other places aren't.

$ vgrep 'pte_offset_map(&'
Index File                    Line Content
    0 arch/sparc/mm/tlb.c      151 pte = pte_offset_map(&pmd, vaddr);
    1 kernel/events/core.c    7501 ptep = pte_offset_map(&pmd, addr);
    2 mm/gup.c                2460 ptem = ptep = pte_offset_map(&pmd, addr);
    3 mm/huge_memory.c        2057 pte = pte_offset_map(&_pmd, haddr);
    4 mm/huge_memory.c        2214 pte = pte_offset_map(&_pmd, haddr);
    5 mm/page_table_check.c    240 pte_t *ptep = pte_offset_map(&pmd, addr);
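
[Editorial note: to make the distinction above concrete, here is a minimal
sketch of the "local copy" pattern used by the callers listed in the vgrep
output. It is not taken from the series; the function name and checks are
hypothetical. The pmd entry is snapshotted locklessly into a stack variable,
and the address of that snapshot, not of the live pmd slot, is what gets
passed to pte_offset_map().]

#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * Hypothetical illustration of the "local copy" pattern: pmdval is a
 * lockless snapshot of *pmd, so a concurrent change to the real pmd
 * entry cannot alter what pte_offset_map() dereferences here.
 */
static void walk_one_pte(pmd_t *pmd, unsigned long addr)
{
	pmd_t pmdval = pmdp_get_lockless(pmd);
	pte_t *pte;

	if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
		return;

	pte = pte_offset_map(&pmdval, addr);	/* address of the snapshot, not pmd */
	if (!pte)
		return;
	/* ... inspect *pte under the rcu_read_lock() taken by pte_offset_map() ... */
	pte_unmap(pte);				/* pairs with it: rcu_read_unlock() */
}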
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index a1326e61d7ee..8b0fc7fdc46f 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -99,7 +99,7 @@ static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
 	((pte_t *)kmap_local_page(pmd_page(*(pmd))) + pte_index((address)))
 #define pte_unmap(pte)	do {	\
 	kunmap_local((pte));	\
-	/* rcu_read_unlock() to be added later */ \
+	rcu_read_unlock();	\
 } while (0)
 #else
 static inline pte_t *__pte_map(pmd_t *pmd, unsigned long address)
@@ -108,7 +108,7 @@ static inline pte_t *__pte_map(pmd_t *pmd, unsigned long address)
 }
 static inline void pte_unmap(pte_t *pte)
 {
-	/* rcu_read_unlock() to be added later */
+	rcu_read_unlock();
 }
 #endif
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index c7ab18a5fb77..674671835631 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -236,7 +236,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 {
 	pmd_t pmdval;
 
-	/* rcu_read_lock() to be added later */
+	rcu_read_lock();
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmdvalp)
 		*pmdvalp = pmdval;
@@ -250,7 +250,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 	}
 	return __pte_map(&pmdval, addr);
 nomap:
-	/* rcu_read_unlock() to be added later */
+	rcu_read_unlock();
 	return NULL;
 }
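
[Editorial note: with the lock/unlock now inside the helpers, the
"imbalances" the commit message warns about are simply unmatched
map/unmap pairs. A rough sketch, using a hypothetical caller that is
not part of the patch, of the rule every caller must now satisfy.]

#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * Hypothetical caller: after this commit, a successful pte_offset_map()
 * holds the RCU read lock until the matching pte_unmap().  An early
 * return between the two would leak an RCU read-side critical section,
 * which RCU/lockdep debugging can then flag (e.g. on a later sleep).
 */
static bool pte_is_present(pmd_t *pmd, unsigned long addr)
{
	pte_t *pte = pte_offset_map(pmd, addr);
	bool ret;

	if (!pte)
		return false;	/* no pte table mapped: no rcu_read_unlock() owed */

	ret = pte_present(ptep_get(pte));
	pte_unmap(pte);		/* rcu_read_unlock() happens in here */
	return ret;
}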
Before putting them to use (several commits later), add rcu_read_lock()
to pte_offset_map(), and rcu_read_unlock() to pte_unmap(). Make this a
separate commit, since it risks exposing imbalances: prior commits have
fixed all the known imbalances, but we may find some have been missed.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 include/linux/pgtable.h | 4 ++--
 mm/pgtable-generic.c    | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)