Message ID: 20200420221126.341272-6-hannes@cmpxchg.org (mailing list archive)
State: New, archived
Series: mm: memcontrol: charge swapin pages on instantiation
On 2020/4/21 6:11 AM, Johannes Weiner wrote:
> The try/commit/cancel protocol that memcg uses dates back to when
> pages used to be uncharged upon removal from the page cache, and thus
> couldn't be committed before the insertion had succeeded. Nowadays,
> pages are uncharged when they are physically freed; it doesn't matter
> whether the insertion was successful or not. For the page cache, the
> transaction dance has become unnecessary.
>
> Introduce a mem_cgroup_charge() function that simply charges a newly
> allocated page to a cgroup and sets up page->mem_cgroup in one single
> step. If the insertion fails, the caller doesn't have to do anything
> but free/put the page.
>
> Then switch the page cache over to this new API.
>
> Subsequent patches will also convert anon pages, but that needs a bit
> more prep work. Right now, memcg depends on page->mapping being
> already set up at the time of charging, so that it can maintain its
> own MEMCG_CACHE and MEMCG_RSS counters. For anon, page->mapping is set
> under the same pte lock under which the page is published, so a single
> charge point that can block doesn't work there just yet.
>
> The following prep patches will replace the private memcg counters
> with the generic vmstat counters, thus removing the page->mapping
> dependency, then complete the transition to the new single-point
> charge API and delete the old transactional scheme.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---

Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
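The protocol change the commit message describes can be sketched as a small userspace C model. This is illustrative only, not the kernel API: "charging" is reduced to a counter, and all names (`toy_page`, `old_add_to_cache`, `new_add_to_cache`, `toy_free_page`) are invented. The point it demonstrates is that once uncharging happens at page free rather than at cache removal, a failed insertion needs no explicit cancel step:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the memcg charge accounting. */
static long charged_pages;

struct toy_page {
	bool charged;
};

/* Old scheme: try_charge up front, then commit on successful
 * insertion or cancel on failure. */
static int old_add_to_cache(struct toy_page *p, bool insert_ok)
{
	charged_pages++;		/* try_charge */
	if (!insert_ok) {
		charged_pages--;	/* cancel_charge */
		return -1;
	}
	p->charged = true;		/* commit_charge */
	return 0;
}

/* New scheme: charge in one step; on failure the caller just
 * frees/puts the page, and the charge is released at free time. */
static int new_add_to_cache(struct toy_page *p, bool insert_ok)
{
	charged_pages++;		/* mem_cgroup_charge() */
	p->charged = true;
	if (!insert_ok)
		return -1;		/* no cancel step needed */
	return 0;
}

/* Uncharge happens when the page is physically freed, regardless of
 * whether the insertion ever succeeded. */
static void toy_free_page(struct toy_page *p)
{
	if (p->charged) {
		charged_pages--;
		p->charged = false;
	}
}
```

Both schemes leave the counter balanced; the new one simply moves the cleanup into the existing free path instead of requiring a dedicated cancel call at every insertion site.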
On Mon, Apr 20, 2020 at 06:11:13PM -0400, Johannes Weiner wrote:
> The try/commit/cancel protocol that memcg uses dates back to when
> pages used to be uncharged upon removal from the page cache, and thus
> couldn't be committed before the insertion had succeeded. Nowadays,
> pages are uncharged when they are physically freed; it doesn't matter
> whether the insertion was successful or not. For the page cache, the
> transaction dance has become unnecessary.
>
> Introduce a mem_cgroup_charge() function that simply charges a newly
> allocated page to a cgroup and sets up page->mem_cgroup in one single
> step. If the insertion fails, the caller doesn't have to do anything
> but free/put the page.
>
> Then switch the page cache over to this new API.
>
> Subsequent patches will also convert anon pages, but that needs a bit
> more prep work. Right now, memcg depends on page->mapping being
> already set up at the time of charging, so that it can maintain its
> own MEMCG_CACHE and MEMCG_RSS counters. For anon, page->mapping is set
> under the same pte lock under which the page is published, so a single
> charge point that can block doesn't work there just yet.
>
> The following prep patches will replace the private memcg counters
> with the generic vmstat counters, thus removing the page->mapping
> dependency, then complete the transition to the new single-point
> charge API and delete the old transactional scheme.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  include/linux/memcontrol.h | 10 ++++
>  mm/filemap.c               | 24 ++++------
>  mm/memcontrol.c            | 27 +++++++++++
>  mm/shmem.c                 | 97 +++++++++++++++++---------------------
>  4 files changed, 89 insertions(+), 69 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index c7875a48c8c1..5e8b0e38f145 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -367,6 +367,10 @@ int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
>  void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
>  			      bool lrucare);
>  void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg);
> +
> +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
> +		      bool lrucare);
> +
>  void mem_cgroup_uncharge(struct page *page);
>  void mem_cgroup_uncharge_list(struct list_head *page_list);
>
> @@ -872,6 +876,12 @@ static inline void mem_cgroup_cancel_charge(struct page *page,
>  {
>  }
>
> +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +				    gfp_t gfp_mask, bool lrucare)
> +{
> +	return 0;
> +}
> +
>  static inline void mem_cgroup_uncharge(struct page *page)
>  {
>  }
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 5b31af9d5b1b..5bdbda965177 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -832,7 +832,6 @@ static int __add_to_page_cache_locked(struct page *page,
>  {
>  	XA_STATE(xas, &mapping->i_pages, offset);
>  	int huge = PageHuge(page);
> -	struct mem_cgroup *memcg;
>  	int error;
>  	void *old;
>
> @@ -840,17 +839,16 @@ static int __add_to_page_cache_locked(struct page *page,
>  	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
>  	mapping_set_update(&xas, mapping);
>
> -	if (!huge) {
> -		error = mem_cgroup_try_charge(page, current->mm,
> -					      gfp_mask, &memcg);
> -		if (error)
> -			return error;
> -	}
> -
>  	get_page(page);
>  	page->mapping = mapping;
>  	page->index = offset;
>
> +	if (!huge) {
> +		error = mem_cgroup_charge(page, current->mm, gfp_mask, false);
> +		if (error)
> +			goto error;
> +	}
> +
>  	do {
>  		xas_lock_irq(&xas);
>  		old = xas_load(&xas);
> @@ -874,20 +872,18 @@ static int __add_to_page_cache_locked(struct page *page,
>  		xas_unlock_irq(&xas);
>  	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
>
> -	if (xas_error(&xas))
> +	if (xas_error(&xas)) {
> +		error = xas_error(&xas);
>  		goto error;
> +	}
>
> -	if (!huge)
> -		mem_cgroup_commit_charge(page, memcg, false);
>  	trace_mm_filemap_add_to_page_cache(page);
>  	return 0;
> error:
>  	page->mapping = NULL;
>  	/* Leave page->index set: truncation relies upon it */
> -	if (!huge)
> -		mem_cgroup_cancel_charge(page, memcg);
>  	put_page(page);
> -	return xas_error(&xas);
> +	return error;
>  }
>  ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 711d6dd5cbb1..b38c0a672d26 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6577,6 +6577,33 @@ void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg)
>  	cancel_charge(memcg, nr_pages);
>  }
>
> +/**
> + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> + * @page: page to charge
> + * @mm: mm context of the victim
> + * @gfp_mask: reclaim mode
> + * @lrucare: page might be on the LRU already
> + *
> + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> + * pages according to @gfp_mask if necessary.
> + *
> + * Returns 0 on success. Otherwise, an error code is returned.
> + */
> +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
> +		      bool lrucare)
> +{
> +	struct mem_cgroup *memcg;
> +	int ret;
> +
> +	VM_BUG_ON_PAGE(!page->mapping, page);
> +
> +	ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
> +	if (ret)
> +		return ret;
> +	mem_cgroup_commit_charge(page, memcg, lrucare);
> +	return 0;
> +}
> +
>  struct uncharge_gather {
>  	struct mem_cgroup *memcg;
>  	unsigned long pgpgout;
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 52c66801321e..2384f6c7ef71 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -605,11 +605,13 @@ static inline bool is_huge_enabled(struct shmem_sb_info *sbinfo)
>   */
>  static int shmem_add_to_page_cache(struct page *page,
>  				   struct address_space *mapping,
> -				   pgoff_t index, void *expected, gfp_t gfp)
> +				   pgoff_t index, void *expected, gfp_t gfp,
> +				   struct mm_struct *charge_mm)
>  {
>  	XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page));
>  	unsigned long i = 0;
>  	unsigned long nr = compound_nr(page);
> +	int error;
>
>  	VM_BUG_ON_PAGE(PageTail(page), page);
>  	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
> @@ -621,6 +623,16 @@ static int shmem_add_to_page_cache(struct page *page,
>  	page->mapping = mapping;
>  	page->index = index;
>
> +	error = mem_cgroup_charge(page, charge_mm, gfp, PageSwapCache(page));
> +	if (error) {
> +		if (!PageSwapCache(page) && PageTransHuge(page)) {
> +			count_vm_event(THP_FILE_FALLBACK);
> +			count_vm_event(THP_FILE_FALLBACK_CHARGE);
> +		}
> +		goto error;
> +	}
> +	cgroup_throttle_swaprate(page, gfp);
> +
>  	do {
>  		void *entry;
>  		xas_lock_irq(&xas);
> @@ -648,12 +660,15 @@ static int shmem_add_to_page_cache(struct page *page,
>  	} while (xas_nomem(&xas, gfp));
>
>  	if (xas_error(&xas)) {
> -		page->mapping = NULL;
> -		page_ref_sub(page, nr);
> -		return xas_error(&xas);
> +		error = xas_error(&xas);
> +		goto error;
>  	}
>
>  	return 0;
> +error:
> +	page->mapping = NULL;
> +	page_ref_sub(page, nr);
> +	return error;
>  }
>
>  /*
> @@ -1619,7 +1634,6 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
>  	struct address_space *mapping = inode->i_mapping;
>  	struct shmem_inode_info *info = SHMEM_I(inode);
>  	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
> -	struct mem_cgroup *memcg;
>  	struct page *page;
>  	swp_entry_t swap;
>  	int error;
> @@ -1664,29 +1678,22 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
>  		goto failed;
>  	}
>
> -	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
> -	if (!error) {
> -		error = shmem_add_to_page_cache(page, mapping, index,
> -						swp_to_radix_entry(swap), gfp);
> -		/*
> -		 * We already confirmed swap under page lock, and make
> -		 * no memory allocation here, so usually no possibility
> -		 * of error; but free_swap_and_cache() only trylocks a
> -		 * page, so it is just possible that the entry has been
> -		 * truncated or holepunched since swap was confirmed.
> -		 * shmem_undo_range() will have done some of the
> -		 * unaccounting, now delete_from_swap_cache() will do
> -		 * the rest.
> -		 */
> -		if (error) {
> -			mem_cgroup_cancel_charge(page, memcg);
> -			delete_from_swap_cache(page);
> -		}
> -	}
> -	if (error)
> +	error = shmem_add_to_page_cache(page, mapping, index,
> +					swp_to_radix_entry(swap), gfp,
> +					charge_mm);
> +	/*
> +	 * We already confirmed swap under page lock, and make no
> +	 * memory allocation here, so usually no possibility of error;
> +	 * but free_swap_and_cache() only trylocks a page, so it is
> +	 * just possible that the entry has been truncated or
> +	 * holepunched since swap was confirmed. shmem_undo_range()
> +	 * will have done some of the unaccounting, now
> +	 * delete_from_swap_cache() will do the rest.
> +	 */
> +	if (error) {
> +		delete_from_swap_cache(page);
> +		goto failed;

-EEXIST (from swap cache) and -ENOMEM (from memcg) should be handled
differently. delete_from_swap_cache() is for the -EEXIST case.

Thanks.
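The unified error exit the patch introduces in __add_to_page_cache_locked() can be sketched in userspace C. This is not the kernel code; `toy_cache`, `toy_insert`, and `toy_add_locked` are invented stand-ins for the xarray insertion and charge steps, and the `int *mapping_set` flag stands in for setting and clearing page->mapping:

```c
#include <errno.h>

/* Stand-in for the page-cache tree; next_error mimics xas_error(). */
struct toy_cache {
	int next_error;
};

static int toy_insert(struct toy_cache *c)
{
	return c->next_error;
}

/* Sketch of the restructured insertion path: page state is set up
 * first, and every failure funnels through one error label that
 * undoes it, returning the captured error code. */
static int toy_add_locked(struct toy_cache *c, int charge_error,
			  int *mapping_set)
{
	int error;

	*mapping_set = 1;		/* page->mapping = mapping */

	error = charge_error;		/* mem_cgroup_charge() stand-in */
	if (error)
		goto error;

	error = toy_insert(c);		/* tree insertion stand-in */
	if (error)
		goto error;

	return 0;
error:
	*mapping_set = 0;		/* page->mapping = NULL */
	return error;
}
```

Capturing the xarray error into a local `error` before jumping, as the real patch does, is what lets both the charge failure and the insertion failure share a single cleanup label.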
On Wed, Apr 22, 2020 at 03:40:41PM +0900, Joonsoo Kim wrote:
> On Mon, Apr 20, 2020 at 06:11:13PM -0400, Johannes Weiner wrote:
> > [...]
> > @@ -1664,29 +1678,22 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
> > [...]
> > +	error = shmem_add_to_page_cache(page, mapping, index,
> > +					swp_to_radix_entry(swap), gfp,
> > +					charge_mm);
> > +	/*
> > +	 * We already confirmed swap under page lock, and make no
> > +	 * memory allocation here, so usually no possibility of error;
> > +	 * but free_swap_and_cache() only trylocks a page, so it is
> > +	 * just possible that the entry has been truncated or
> > +	 * holepunched since swap was confirmed. shmem_undo_range()
> > +	 * will have done some of the unaccounting, now
> > +	 * delete_from_swap_cache() will do the rest.
> > +	 */
> > +	if (error) {
> > +		delete_from_swap_cache(page);
> > 		goto failed;
>
> -EEXIST (from swap cache) and -ENOMEM (from memcg) should be handled
> differently. delete_from_swap_cache() is for the -EEXIST case.

Good catch, I accidentally changed things here.

I was just going to change it back, but now I'm trying to understand
how it actually works. Who is removing the page from the swap cache if
shmem_undo_range() races but we fail to charge the page?

Here is how this race is supposed to be handled: The page is in the
swapcache, we have it locked and confirmed that the entry in i_pages
is indeed a swap entry. We charge the page, then we try to replace the
swap entry in i_pages with the actual page. If we determine, under
tree lock now, that shmem_undo_range() has raced with us, unaccounted
the swap space, but must have failed to get the page lock, we remove
the page from the swap cache on our side, to free up the swap slot and
the page.

But what if shmem_undo_range() raced with us, deleted the swap entry
from i_pages while we had the page locked, but then we simply failed
to charge? We unlock the page and return -EEXIST (shmem_confirm_swap
at the exit). The page with its userdata is now in the swapcache, but
there is no corresponding swap entry in i_pages. shmem_getpage_gfp()
sees the -EEXIST, retries, finds nothing in i_pages and allocates a
new, empty page.

Aren't we leaking the swap slot and the page?
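The race being described can be reduced to a few boolean state transitions. The following userspace sketch is only a model of the argument, not kernel code: `toy_state` and its flags are invented, `toy_undo_range` mimics shmem_undo_range()'s trylock behavior, and `toy_swapin_charge_fails` plays out the scenario where the charge fails while the swapin path holds the page lock:

```c
#include <errno.h>
#include <stdbool.h>

/* State of one shmem page during the race. */
struct toy_state {
	bool page_in_swapcache;		/* page pins a swap slot + memory */
	bool swap_entry_in_i_pages;	/* i_pages still holds the swap entry */
	bool page_locked;
};

/* shmem_undo_range(): removes the swap entry from i_pages, but
 * free_swap_and_cache() only trylocks the page, so if the swapin
 * path holds the lock, the page stays in the swap cache. */
static void toy_undo_range(struct toy_state *s)
{
	s->swap_entry_in_i_pages = false;
	if (!s->page_locked)
		s->page_in_swapcache = false;
}

/* Swapin path when the memcg charge fails: it unlocks and bails out,
 * and shmem_confirm_swap() at the exit turns this into -EEXIST,
 * without anyone deleting the page from the swap cache. */
static int toy_swapin_charge_fails(struct toy_state *s)
{
	s->page_locked = true;		/* lock_page() */
	/* swap already confirmed under page lock ... */
	toy_undo_range(s);		/* truncation races in here */
	/* mem_cgroup_charge() fails: unlock and give up */
	s->page_locked = false;
	return -EEXIST;
}
```

After this sequence the page is still in the swap cache while i_pages no longer references it, which is exactly the leak the question raises: the retry path will allocate a fresh page and the old one is never found again.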
On Wed, Apr 22, 2020 at 08:09:46AM -0400, Johannes Weiner wrote: > On Wed, Apr 22, 2020 at 03:40:41PM +0900, Joonsoo Kim wrote: > > On Mon, Apr 20, 2020 at 06:11:13PM -0400, Johannes Weiner wrote: > > > The try/commit/cancel protocol that memcg uses dates back to when > > > pages used to be uncharged upon removal from the page cache, and thus > > > couldn't be committed before the insertion had succeeded. Nowadays, > > > pages are uncharged when they are physically freed; it doesn't matter > > > whether the insertion was successful or not. For the page cache, the > > > transaction dance has become unnecessary. > > > > > > Introduce a mem_cgroup_charge() function that simply charges a newly > > > allocated page to a cgroup and sets up page->mem_cgroup in one single > > > step. If the insertion fails, the caller doesn't have to do anything > > > but free/put the page. > > > > > > Then switch the page cache over to this new API. > > > > > > Subsequent patches will also convert anon pages, but it needs a bit > > > more prep work. Right now, memcg depends on page->mapping being > > > already set up at the time of charging, so that it can maintain its > > > own MEMCG_CACHE and MEMCG_RSS counters. For anon, page->mapping is set > > > under the same pte lock under which the page is publishd, so a single > > > charge point that can block doesn't work there just yet. > > > > > > The following prep patches will replace the private memcg counters > > > with the generic vmstat counters, thus removing the page->mapping > > > dependency, then complete the transition to the new single-point > > > charge API and delete the old transactional scheme. 
> > > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> > > > --- > > > include/linux/memcontrol.h | 10 ++++ > > > mm/filemap.c | 24 ++++------ > > > mm/memcontrol.c | 27 +++++++++++ > > > mm/shmem.c | 97 +++++++++++++++++--------------------- > > > 4 files changed, 89 insertions(+), 69 deletions(-) > > > > > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h > > > index c7875a48c8c1..5e8b0e38f145 100644 > > > --- a/include/linux/memcontrol.h > > > +++ b/include/linux/memcontrol.h > > > @@ -367,6 +367,10 @@ int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm, > > > void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg, > > > bool lrucare); > > > void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg); > > > + > > > +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, > > > + bool lrucare); > > > + > > > void mem_cgroup_uncharge(struct page *page); > > > void mem_cgroup_uncharge_list(struct list_head *page_list); > > > > > > @@ -872,6 +876,12 @@ static inline void mem_cgroup_cancel_charge(struct page *page, > > > { > > > } > > > > > > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm, > > > + gfp_t gfp_mask, bool lrucare) > > > +{ > > > + return 0; > > > +} > > > + > > > static inline void mem_cgroup_uncharge(struct page *page) > > > { > > > } > > > diff --git a/mm/filemap.c b/mm/filemap.c > > > index 5b31af9d5b1b..5bdbda965177 100644 > > > --- a/mm/filemap.c > > > +++ b/mm/filemap.c > > > @@ -832,7 +832,6 @@ static int __add_to_page_cache_locked(struct page *page, > > > { > > > XA_STATE(xas, &mapping->i_pages, offset); > > > int huge = PageHuge(page); > > > - struct mem_cgroup *memcg; > > > int error; > > > void *old; > > > > > > @@ -840,17 +839,16 @@ static int __add_to_page_cache_locked(struct page *page, > > > VM_BUG_ON_PAGE(PageSwapBacked(page), page); > > > mapping_set_update(&xas, mapping); > > > > > > - if (!huge) { > > 
> - error = mem_cgroup_try_charge(page, current->mm, > > > - gfp_mask, &memcg); > > > - if (error) > > > - return error; > > > - } > > > - > > > get_page(page); > > > page->mapping = mapping; > > > page->index = offset; > > > > > > + if (!huge) { > > > + error = mem_cgroup_charge(page, current->mm, gfp_mask, false); > > > + if (error) > > > + goto error; > > > + } > > > + > > > do { > > > xas_lock_irq(&xas); > > > old = xas_load(&xas); > > > @@ -874,20 +872,18 @@ static int __add_to_page_cache_locked(struct page *page, > > > xas_unlock_irq(&xas); > > > } while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK)); > > > > > > - if (xas_error(&xas)) > > > + if (xas_error(&xas)) { > > > + error = xas_error(&xas); > > > goto error; > > > + } > > > > > > - if (!huge) > > > - mem_cgroup_commit_charge(page, memcg, false); > > > trace_mm_filemap_add_to_page_cache(page); > > > return 0; > > > error: > > > page->mapping = NULL; > > > /* Leave page->index set: truncation relies upon it */ > > > - if (!huge) > > > - mem_cgroup_cancel_charge(page, memcg); > > > put_page(page); > > > - return xas_error(&xas); > > > + return error; > > > } > > > ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO); > > > > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c > > > index 711d6dd5cbb1..b38c0a672d26 100644 > > > --- a/mm/memcontrol.c > > > +++ b/mm/memcontrol.c > > > @@ -6577,6 +6577,33 @@ void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg) > > > cancel_charge(memcg, nr_pages); > > > } > > > > > > +/** > > > + * mem_cgroup_charge - charge a newly allocated page to a cgroup > > > + * @page: page to charge > > > + * @mm: mm context of the victim > > > + * @gfp_mask: reclaim mode > > > + * @lrucare: page might be on the LRU already > > > + * > > > + * Try to charge @page to the memcg that @mm belongs to, reclaiming > > > + * pages according to @gfp_mask if necessary. > > > + * > > > + * Returns 0 on success. Otherwise, an error code is returned. 
> > > + */ > > > +int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask, > > > + bool lrucare) > > > +{ > > > + struct mem_cgroup *memcg; > > > + int ret; > > > + > > > + VM_BUG_ON_PAGE(!page->mapping, page); > > > + > > > + ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg); > > > + if (ret) > > > + return ret; > > > + mem_cgroup_commit_charge(page, memcg, lrucare); > > > + return 0; > > > +} > > > + > > > struct uncharge_gather { > > > struct mem_cgroup *memcg; > > > unsigned long pgpgout; > > > diff --git a/mm/shmem.c b/mm/shmem.c > > > index 52c66801321e..2384f6c7ef71 100644 > > > --- a/mm/shmem.c > > > +++ b/mm/shmem.c > > > @@ -605,11 +605,13 @@ static inline bool is_huge_enabled(struct shmem_sb_info *sbinfo) > > > */ > > > static int shmem_add_to_page_cache(struct page *page, > > > struct address_space *mapping, > > > - pgoff_t index, void *expected, gfp_t gfp) > > > + pgoff_t index, void *expected, gfp_t gfp, > > > + struct mm_struct *charge_mm) > > > { > > > XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page)); > > > unsigned long i = 0; > > > unsigned long nr = compound_nr(page); > > > + int error; > > > > > > VM_BUG_ON_PAGE(PageTail(page), page); > > > VM_BUG_ON_PAGE(index != round_down(index, nr), page); > > > @@ -621,6 +623,16 @@ static int shmem_add_to_page_cache(struct page *page, > > > page->mapping = mapping; > > > page->index = index; > > > > > > + error = mem_cgroup_charge(page, charge_mm, gfp, PageSwapCache(page)); > > > + if (error) { > > > + if (!PageSwapCache(page) && PageTransHuge(page)) { > > > + count_vm_event(THP_FILE_FALLBACK); > > > + count_vm_event(THP_FILE_FALLBACK_CHARGE); > > > + } > > > + goto error; > > > + } > > > + cgroup_throttle_swaprate(page, gfp); > > > + > > > do { > > > void *entry; > > > xas_lock_irq(&xas); > > > @@ -648,12 +660,15 @@ static int shmem_add_to_page_cache(struct page *page, > > > } while (xas_nomem(&xas, gfp)); > > > > > > if (xas_error(&xas)) { > > > - 
page->mapping = NULL; > > > - page_ref_sub(page, nr); > > > - return xas_error(&xas); > > > + error = xas_error(&xas); > > > + goto error; > > > } > > > > > > return 0; > > > +error: > > > + page->mapping = NULL; > > > + page_ref_sub(page, nr); > > > + return error; > > > } > > > > > > /* > > > @@ -1619,7 +1634,6 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, > > > struct address_space *mapping = inode->i_mapping; > > > struct shmem_inode_info *info = SHMEM_I(inode); > > > struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm; > > > - struct mem_cgroup *memcg; > > > struct page *page; > > > swp_entry_t swap; > > > int error; > > > @@ -1664,29 +1678,22 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, > > > goto failed; > > > } > > > > > > - error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg); > > > - if (!error) { > > > - error = shmem_add_to_page_cache(page, mapping, index, > > > - swp_to_radix_entry(swap), gfp); > > > - /* > > > - * We already confirmed swap under page lock, and make > > > - * no memory allocation here, so usually no possibility > > > - * of error; but free_swap_and_cache() only trylocks a > > > - * page, so it is just possible that the entry has been > > > - * truncated or holepunched since swap was confirmed. > > > - * shmem_undo_range() will have done some of the > > > - * unaccounting, now delete_from_swap_cache() will do > > > - * the rest. 
> > > - */ > > > - if (error) { > > > - mem_cgroup_cancel_charge(page, memcg); > > > - delete_from_swap_cache(page); > > > - } > > > - } > > > - if (error) > > > + error = shmem_add_to_page_cache(page, mapping, index, > > > + swp_to_radix_entry(swap), gfp, > > > + charge_mm); > > > + /* > > > + * We already confirmed swap under page lock, and make no > > > + * memory allocation here, so usually no possibility of error; > > > + * but free_swap_and_cache() only trylocks a page, so it is > > > + * just possible that the entry has been truncated or > > > + * holepunched since swap was confirmed. shmem_undo_range() > > > + * will have done some of the unaccounting, now > > > + * delete_from_swap_cache() will do the rest. > > > + */ > > > + if (error) { > > > + delete_from_swap_cache(page); > > > goto failed; > > > > -EEXIST (from swap cache) and -ENOMEM (from memcg) should be handled > > differently. delete_from_swap_cache() is for -EEXIST case. > > Good catch, I accidentally changed things here. > > I was just going to change it back, but now I'm trying to understand > how it actually works. > > Who is removing the page from swap cache if shmem_undo_range() races > but we fail to charge the page? > > Here is how this race is supposed to be handled: The page is in the > swapcache, we have it locked and confirmed that the entry in i_pages > is indeed a swap entry. We charge the page, then we try to replace the > swap entry in i_pages with the actual page. If we determine, under > tree lock now, that shmem_undo_range has raced with us, unaccounted > the swap space, but must have failed to get the page lock, we remove > the page from swap cache on our side, to free up swap slot and page. > > But what if shmem_undo_range() raced with us, deleted the swap entry > from i_pages while we had the page locked, but then we simply failed > to charge? We unlock the page and return -EEXIST (shmem_confirm_swap > at the exit). 
The page with its userdata is now in swapcache, but no
> corresponding swap entry in i_pages. shmem_getpage_gfp() sees the
> -EEXIST, retries, finds nothing in i_pages and allocates a new, empty
> page.
>
> Aren't we leaking the swap slot and the page?

Yes, you're right! It seems that it's possible to leak the swap slot
and the page. The race could happen at any point after lock_page() and
shmem_confirm_swap() are done. And I think it's not possible to fix
the problem on the shmem_swapin_page() side, since we can't know when
trylock_page() is called. Maybe the solution would be, instead of
using free_swap_and_cache() in shmem_undo_range(), which only trylocks
the page, to use another function that calls lock_page().

Thanks.
On Thu, Apr 23, 2020 at 02:25:06PM +0900, Joonsoo Kim wrote: > On Wed, Apr 22, 2020 at 08:09:46AM -0400, Johannes Weiner wrote: > > On Wed, Apr 22, 2020 at 03:40:41PM +0900, Joonsoo Kim wrote: > > > On Mon, Apr 20, 2020 at 06:11:13PM -0400, Johannes Weiner wrote: > > > > @@ -1664,29 +1678,22 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, > > > > goto failed; > > > > } > > > > > > > > - error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg); > > > > - if (!error) { > > > > - error = shmem_add_to_page_cache(page, mapping, index, > > > > - swp_to_radix_entry(swap), gfp); > > > > - /* > > > > - * We already confirmed swap under page lock, and make > > > > - * no memory allocation here, so usually no possibility > > > > - * of error; but free_swap_and_cache() only trylocks a > > > > - * page, so it is just possible that the entry has been > > > > - * truncated or holepunched since swap was confirmed. > > > > - * shmem_undo_range() will have done some of the > > > > - * unaccounting, now delete_from_swap_cache() will do > > > > - * the rest. > > > > - */ > > > > - if (error) { > > > > - mem_cgroup_cancel_charge(page, memcg); > > > > - delete_from_swap_cache(page); > > > > - } > > > > - } > > > > - if (error) > > > > + error = shmem_add_to_page_cache(page, mapping, index, > > > > + swp_to_radix_entry(swap), gfp, > > > > + charge_mm); > > > > + /* > > > > + * We already confirmed swap under page lock, and make no > > > > + * memory allocation here, so usually no possibility of error; > > > > + * but free_swap_and_cache() only trylocks a page, so it is > > > > + * just possible that the entry has been truncated or > > > > + * holepunched since swap was confirmed. shmem_undo_range() > > > > + * will have done some of the unaccounting, now > > > > + * delete_from_swap_cache() will do the rest. 
> > > > + */ > > > > + if (error) { > > > > + delete_from_swap_cache(page); > > > > goto failed; > > > > > > -EEXIST (from swap cache) and -ENOMEM (from memcg) should be handled > > > differently. delete_from_swap_cache() is for -EEXIST case. > > > > Good catch, I accidentally changed things here. > > > > I was just going to change it back, but now I'm trying to understand > > how it actually works. > > > > Who is removing the page from swap cache if shmem_undo_range() races > > but we fail to charge the page? > > > > Here is how this race is supposed to be handled: The page is in the > > swapcache, we have it locked and confirmed that the entry in i_pages > > is indeed a swap entry. We charge the page, then we try to replace the > > swap entry in i_pages with the actual page. If we determine, under > > tree lock now, that shmem_undo_range has raced with us, unaccounted > > the swap space, but must have failed to get the page lock, we remove > > the page from swap cache on our side, to free up swap slot and page. > > > > But what if shmem_undo_range() raced with us, deleted the swap entry > > from i_pages while we had the page locked, but then we simply failed > > to charge? We unlock the page and return -EEXIST (shmem_confirm_swap > > at the exit). The page with its userdata is now in swapcache, but no > > corresponding swap entry in i_pages. shmem_getpage_gfp() sees the > > -EEXIST, retries, finds nothing in i_pages and allocates a new, empty > > page. > > > > Aren't we leaking the swap slot and the page? > > Yes, you're right! It seems that it's possible to leak the swap slot > and the page. Race could happen for all the places after lock_page() > and shmem_confirm_swap() are done. And, I think that it's not possible > to fix the problem in shmem_swapin_page() side since we can't know the > timing that trylock_page() is called. 
Maybe, solution would be, > instead of using free_swap_and_cache() in shmem_undo_range() that > calls trylock_page(), to use another function that calls lock_page(). I looked at this some more, as well as compared it to non-shmem swapping. My conclusion is - and Hugh may correct me on this - that the deletion looks mandatory but is actually an optimization. Page reclaim will ultimately pick these pages up. When non-shmem pages are swapped in by readahead (locked until IO completes) and their page tables are simultaneously unmapped, the zap_pte_range() code calls free_swap_and_cache() and the locked pages are stranded in the swap cache with no page table references. We rely on page reclaim to pick them up later on. The same appears to be true for shmem. If the references to the swap page are zapped while we're trying to swap in, we can strand the page in the swap cache. But it's not up to swapin to detect this reliably, it just frees the page more quickly than having to wait for reclaim. That being said, my patch introduces potentially undesirable behavior (although AFAICS no correctness problem): We should only delete the page from swapcache when we actually raced with undo_range - which we see from the swap entry having been purged from the page cache tree. If we delete the page from swapcache just because we failed to charge it, the next fault has to read the still-valid page again from the swap device. I'm going to include this: diff --git a/mm/shmem.c b/mm/shmem.c index e80167927dce..236642775f89 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -640,7 +640,7 @@ static int shmem_add_to_page_cache(struct page *page, xas_lock_irq(&xas); entry = xas_find_conflict(&xas); if (entry != expected) - xas_set_err(&xas, -EEXIST); + xas_set_err(&xas, expected ? 
-ENOENT : -EEXIST); xas_create_range(&xas); if (xas_error(&xas)) goto unlock; @@ -1683,17 +1683,18 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, error = shmem_add_to_page_cache(page, mapping, index, swp_to_radix_entry(swap), gfp, charge_mm); - /* - * We already confirmed swap under page lock, and make no - * memory allocation here, so usually no possibility of error; - * but free_swap_and_cache() only trylocks a page, so it is - * just possible that the entry has been truncated or - * holepunched since swap was confirmed. shmem_undo_range() - * will have done some of the unaccounting, now - * delete_from_swap_cache() will do the rest. - */ if (error) { - delete_from_swap_cache(page); + /* + * We already confirmed swap under page lock, but + * free_swap_and_cache() only trylocks a page, so it + * is just possible that the entry has been truncated + * or holepunched since swap was confirmed. + * shmem_undo_range() will have done some of the + * unaccounting, now delete_from_swap_cache() will do + * the rest. + */ + if (error == -ENOENT) + delete_from_swap_cache(page); goto failed; }
On Fri, May 08, 2020 at 12:01:22PM -0400, Johannes Weiner wrote: > On Thu, Apr 23, 2020 at 02:25:06PM +0900, Joonsoo Kim wrote: > > On Wed, Apr 22, 2020 at 08:09:46AM -0400, Johannes Weiner wrote: > > > On Wed, Apr 22, 2020 at 03:40:41PM +0900, Joonsoo Kim wrote: > > > > On Mon, Apr 20, 2020 at 06:11:13PM -0400, Johannes Weiner wrote: > > > > > @@ -1664,29 +1678,22 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, > > > > > goto failed; > > > > > } > > > > > > > > > > - error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg); > > > > > - if (!error) { > > > > > - error = shmem_add_to_page_cache(page, mapping, index, > > > > > - swp_to_radix_entry(swap), gfp); > > > > > - /* > > > > > - * We already confirmed swap under page lock, and make > > > > > - * no memory allocation here, so usually no possibility > > > > > - * of error; but free_swap_and_cache() only trylocks a > > > > > - * page, so it is just possible that the entry has been > > > > > - * truncated or holepunched since swap was confirmed. > > > > > - * shmem_undo_range() will have done some of the > > > > > - * unaccounting, now delete_from_swap_cache() will do > > > > > - * the rest. > > > > > - */ > > > > > - if (error) { > > > > > - mem_cgroup_cancel_charge(page, memcg); > > > > > - delete_from_swap_cache(page); > > > > > - } > > > > > - } > > > > > - if (error) > > > > > + error = shmem_add_to_page_cache(page, mapping, index, > > > > > + swp_to_radix_entry(swap), gfp, > > > > > + charge_mm); > > > > > + /* > > > > > + * We already confirmed swap under page lock, and make no > > > > > + * memory allocation here, so usually no possibility of error; > > > > > + * but free_swap_and_cache() only trylocks a page, so it is > > > > > + * just possible that the entry has been truncated or > > > > > + * holepunched since swap was confirmed. shmem_undo_range() > > > > > + * will have done some of the unaccounting, now > > > > > + * delete_from_swap_cache() will do the rest. 
> > > > > + */ > > > > > + if (error) { > > > > > + delete_from_swap_cache(page); > > > > > goto failed; > > > > > > > > -EEXIST (from swap cache) and -ENOMEM (from memcg) should be handled > > > > differently. delete_from_swap_cache() is for -EEXIST case. > > > > > > Good catch, I accidentally changed things here. > > > > > > I was just going to change it back, but now I'm trying to understand > > > how it actually works. > > > > > > Who is removing the page from swap cache if shmem_undo_range() races > > > but we fail to charge the page? > > > > > > Here is how this race is supposed to be handled: The page is in the > > > swapcache, we have it locked and confirmed that the entry in i_pages > > > is indeed a swap entry. We charge the page, then we try to replace the > > > swap entry in i_pages with the actual page. If we determine, under > > > tree lock now, that shmem_undo_range has raced with us, unaccounted > > > the swap space, but must have failed to get the page lock, we remove > > > the page from swap cache on our side, to free up swap slot and page. > > > > > > But what if shmem_undo_range() raced with us, deleted the swap entry > > > from i_pages while we had the page locked, but then we simply failed > > > to charge? We unlock the page and return -EEXIST (shmem_confirm_swap > > > at the exit). The page with its userdata is now in swapcache, but no > > > corresponding swap entry in i_pages. shmem_getpage_gfp() sees the > > > -EEXIST, retries, finds nothing in i_pages and allocates a new, empty > > > page. > > > > > > Aren't we leaking the swap slot and the page? > > > > Yes, you're right! It seems that it's possible to leak the swap slot > > and the page. Race could happen for all the places after lock_page() > > and shmem_confirm_swap() are done. And, I think that it's not possible > > to fix the problem in shmem_swapin_page() side since we can't know the > > timing that trylock_page() is called. 
Maybe, solution would be, > > instead of using free_swap_and_cache() in shmem_undo_range() that > > calls trylock_page(), to use another function that calls lock_page(). > > I looked at this some more, as well as compared it to non-shmem > swapping. My conclusion is - and Hugh may correct me on this - that > the deletion looks mandatory but is actually an optimization. Page > reclaim will ultimately pick these pages up. > > When non-shmem pages are swapped in by readahead (locked until IO > completes) and their page tables are simultaneously unmapped, the > zap_pte_range() code calls free_swap_and_cache() and the locked pages > are stranded in the swap cache with no page table references. We rely > on page reclaim to pick them up later on. > > The same appears to be true for shmem. If the references to the swap > page are zapped while we're trying to swap in, we can strand the page > in the swap cache. But it's not up to swapin to detect this reliably, > it just frees the page more quickly than having to wait for reclaim. > > That being said, my patch introduces potentially undesirable behavior > (although AFAICS no correctness problem): We should only delete the > page from swapcache when we actually raced with undo_range - which we > see from the swap entry having been purged from the page cache > tree. If we delete the page from swapcache just because we failed to > charge it, the next fault has to read the still-valid page again from > the swap device. I got it! Thanks for explanation. Thanks.
On Fri, 8 May 2020, Johannes Weiner wrote: > > I looked at this some more, as well as compared it to non-shmem > swapping. My conclusion is - and Hugh may correct me on this - that > the deletion looks mandatory but is actually an optimization. Page > reclaim will ultimately pick these pages up. > > When non-shmem pages are swapped in by readahead (locked until IO > completes) and their page tables are simultaneously unmapped, the > zap_pte_range() code calls free_swap_and_cache() and the locked pages > are stranded in the swap cache with no page table references. We rely > on page reclaim to pick them up later on. > > The same appears to be true for shmem. If the references to the swap > page are zapped while we're trying to swap in, we can strand the page > in the swap cache. But it's not up to swapin to detect this reliably, > it just frees the page more quickly than having to wait for reclaim. I think you've got all that exactly right, thanks for working it out. It originates from v3.7's 215c02bc33bb ("tmpfs: fix shmem_getpage_gfp() VM_BUG_ON") - in which I also had to thank you. I think I chose to do the delete_from_swap_cache() right there, partly because of following shmem_unuse_inode() code which already did that, partly on the basis that while we have to observe the case then it's better to clean it up, and partly out of guilt that our page lock here is what had prevented shmem_undo_range() from completing its job; but I believe you're right that unused swapcache reclaim would sort it out eventually. > > That being said, my patch introduces potentially undesirable behavior > (although AFAICS no correctness problem): We should only delete the > page from swapcache when we actually raced with undo_range - which we > see from the swap entry having been purged from the page cache > tree. If we delete the page from swapcache just because we failed to > charge it, the next fault has to read the still-valid page again from > the swap device. Yes. 
> > I'm going to include this: I haven't pulled down your V2 series into a tree yet (expecting perhaps a respin from Alex on top, when I hope to switch over to trying them both), so haven't looked into the context and may be wrong... > > diff --git a/mm/shmem.c b/mm/shmem.c > index e80167927dce..236642775f89 100644 > --- a/mm/shmem.c > +++ b/mm/shmem.c > @@ -640,7 +640,7 @@ static int shmem_add_to_page_cache(struct page *page, > xas_lock_irq(&xas); > entry = xas_find_conflict(&xas); > if (entry != expected) > - xas_set_err(&xas, -EEXIST); > + xas_set_err(&xas, expected ? -ENOENT : -EEXIST); Two things on this. Minor matter of taste, I'd prefer that as xas_set_err(&xas, entry ? -EEXIST : -ENOENT); which would be more general and more understandable - but what you have written should be fine for the actual callers. Except... I think returning -ENOENT there will not work correctly, in the case of a punched hole. Because (unless you've reworked it and I just haven't looked) shmem_getpage_gfp() knows to retry in the case of -EEXIST, but -ENOENT will percolate up to shmem_fault() and result in a SIGBUS, or a read/write error, when the hole should just get refilled instead. Not something that needs fixing in a hurry (it took trinity to generate this racy case in the first place), I'll take another look once I've pulled it into a tree (or collected next mmotm) - unless you've already changed it around by then. Hugh > xas_create_range(&xas); > if (xas_error(&xas)) > goto unlock; > @@ -1683,17 +1683,18 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, > error = shmem_add_to_page_cache(page, mapping, index, > swp_to_radix_entry(swap), gfp, > charge_mm); > - /* > - * We already confirmed swap under page lock, and make no > - * memory allocation here, so usually no possibility of error; > - * but free_swap_and_cache() only trylocks a page, so it is > - * just possible that the entry has been truncated or > - * holepunched since swap was confirmed. 
shmem_undo_range() > - * will have done some of the unaccounting, now > - * delete_from_swap_cache() will do the rest. > - */ > if (error) { > - delete_from_swap_cache(page); > + /* > + * We already confirmed swap under page lock, but > + * free_swap_and_cache() only trylocks a page, so it > + * is just possible that the entry has been truncated > + * or holepunched since swap was confirmed. > + * shmem_undo_range() will have done some of the > + * unaccounting, now delete_from_swap_cache() will do > + * the rest. > + */ > + if (error == -ENOENT) > + delete_from_swap_cache(page); > goto failed; > } > >
On Mon, May 11, 2020 at 12:38:04AM -0700, Hugh Dickins wrote: > On Fri, 8 May 2020, Johannes Weiner wrote: > > > > I looked at this some more, as well as compared it to non-shmem > > swapping. My conclusion is - and Hugh may correct me on this - that > > the deletion looks mandatory but is actually an optimization. Page > > reclaim will ultimately pick these pages up. > > > > When non-shmem pages are swapped in by readahead (locked until IO > > completes) and their page tables are simultaneously unmapped, the > > zap_pte_range() code calls free_swap_and_cache() and the locked pages > > are stranded in the swap cache with no page table references. We rely > > on page reclaim to pick them up later on. > > > > The same appears to be true for shmem. If the references to the swap > > page are zapped while we're trying to swap in, we can strand the page > > in the swap cache. But it's not up to swapin to detect this reliably, > > it just frees the page more quickly than having to wait for reclaim. > > I think you've got all that exactly right, thanks for working it out. > It originates from v3.7's 215c02bc33bb ("tmpfs: fix shmem_getpage_gfp() > VM_BUG_ON") - in which I also had to thank you. I should have looked where it actually came from - I had forgotten about that patch! > I think I chose to do the delete_from_swap_cache() right there, partly > because of following shmem_unuse_inode() code which already did that, > partly on the basis that while we have to observe the case then it's > better to clean it up, and partly out of guilt that our page lock here > is what had prevented shmem_undo_range() from completing its job; but > I believe you're right that unused swapcache reclaim would sort it out > eventually. That makes sense to me. 
> > diff --git a/mm/shmem.c b/mm/shmem.c > > index e80167927dce..236642775f89 100644 > > --- a/mm/shmem.c > > +++ b/mm/shmem.c > > @@ -640,7 +640,7 @@ static int shmem_add_to_page_cache(struct page *page, > > xas_lock_irq(&xas); > > entry = xas_find_conflict(&xas); > > if (entry != expected) > > - xas_set_err(&xas, -EEXIST); > > + xas_set_err(&xas, expected ? -ENOENT : -EEXIST); > > Two things on this. > > Minor matter of taste, I'd prefer that as > xas_set_err(&xas, entry ? -EEXIST : -ENOENT); > which would be more general and more understandable - > but what you have written should be fine for the actual callers. Yes, checking `expected' was to differentiate the behavior depending on the callsite. But testing `entry' is more obvious in that location. > Except... I think returning -ENOENT there will not work correctly, > in the case of a punched hole. Because (unless you've reworked it > and I just haven't looked) shmem_getpage_gfp() knows to retry in > the case of -EEXIST, but -ENOENT will percolate up to shmem_fault() > and result in a SIGBUS, or a read/write error, when the hole should > just get refilled instead. Good catch, I had indeed missed that. I'm going to make it retry on -ENOENT as well. We could have it go directly to allocating a new page, but it seems unnecessarily complicated: we've already been retrying in this situation until now, so I would stick to "there was a race, retry." > Not something that needs fixing in a hurry (it took trinity to > generate this racy case in the first place), I'll take another look > once I've pulled it into a tree (or collected next mmotm) - unless > you've already have changed it around by then. Attaching a delta fix based on your observations. Andrew, barring any objections to this, could you please fold it into the version you have in your tree already? 
--- From 33d03ceebce0a6261d472ddc9c5a07940f44714c Mon Sep 17 00:00:00 2001 From: Johannes Weiner <hannes@cmpxchg.org> Date: Mon, 11 May 2020 10:45:14 -0400 Subject: [PATCH] mm: memcontrol: convert page cache to a new mem_cgroup_charge() API fix Incorporate Hugh's feedback: - shmem_getpage_gfp() needs to handle the new -ENOENT that was previously implied in the -EEXIST when a swap entry changed under us in any way. Otherwise hole punching could cause a racing fault to SIGBUS instead of allocating a new page. - It is indeed page reclaim that picks up any swapcache we leave stranded when free_swap_and_cache() runs on a page locked by somebody else. Document that our delete_from_swap_cache() is an optimization, not something we rely on for correctness. - Style cleanup: testing `expected' to decide on -EEXIST vs -ENOENT differentiates the callsites, but is a bit awkward to read. Test `entry' instead. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> --- mm/shmem.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/mm/shmem.c b/mm/shmem.c index afd5a057ebb7..00fb001e8f3e 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -638,7 +638,7 @@ static int shmem_add_to_page_cache(struct page *page, xas_lock_irq(&xas); entry = xas_find_conflict(&xas); if (entry != expected) - xas_set_err(&xas, expected ? -ENOENT : -EEXIST); + xas_set_err(&xas, entry ? -EEXIST : -ENOENT); xas_create_range(&xas); if (xas_error(&xas)) goto unlock; @@ -1686,10 +1686,13 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, * We already confirmed swap under page lock, but * free_swap_and_cache() only trylocks a page, so it * is just possible that the entry has been truncated - * or holepunched since swap was confirmed. - * shmem_undo_range() will have done some of the - * unaccounting, now delete_from_swap_cache() will do - * the rest. + * or holepunched since swap was confirmed. 
This could + * occur at any time while the page is locked, and + * usually page reclaim will take care of the stranded + * swapcache page. But when we catch it, we may as + * well clean up after ourselves: shmem_undo_range() + * will have done some of the unaccounting, now + * delete_from_swap_cache() will do the rest. */ if (error == -ENOENT) delete_from_swap_cache(page); @@ -1765,7 +1768,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, if (xa_is_value(page)) { error = shmem_swapin_page(inode, index, &page, sgp, gfp, vma, fault_type); - if (error == -EEXIST) + if (error == -EEXIST || error == -ENOENT) goto repeat; *pagep = page;
On Mon, 11 May 2020, Johannes Weiner wrote: > On Mon, May 11, 2020 at 12:38:04AM -0700, Hugh Dickins wrote: > > On Fri, 8 May 2020, Johannes Weiner wrote: > > > > > > I looked at this some more, as well as compared it to non-shmem > > > swapping. My conclusion is - and Hugh may correct me on this - that > > > the deletion looks mandatory but is actually an optimization. Page > > > reclaim will ultimately pick these pages up. > > > > > > When non-shmem pages are swapped in by readahead (locked until IO > > > completes) and their page tables are simultaneously unmapped, the > > > zap_pte_range() code calls free_swap_and_cache() and the locked pages > > > are stranded in the swap cache with no page table references. We rely > > > on page reclaim to pick them up later on. > > > > > > The same appears to be true for shmem. If the references to the swap > > > page are zapped while we're trying to swap in, we can strand the page > > > in the swap cache. But it's not up to swapin to detect this reliably, > > > it just frees the page more quickly than having to wait for reclaim. > > > > I think you've got all that exactly right, thanks for working it out. > > It originates from v3.7's 215c02bc33bb ("tmpfs: fix shmem_getpage_gfp() > > VM_BUG_ON") - in which I also had to thank you. > > I should have looked where it actually came from - I had forgotten > about that patch! > > > I think I chose to do the delete_from_swap_cache() right there, partly > > because of following shmem_unuse_inode() code which already did that, > > partly on the basis that while we have to observe the case then it's > > better to clean it up, and partly out of guilt that our page lock here > > is what had prevented shmem_undo_range() from completing its job; but > > I believe you're right that unused swapcache reclaim would sort it out > > eventually. > > That makes sense to me. 
> > > > diff --git a/mm/shmem.c b/mm/shmem.c > > > index e80167927dce..236642775f89 100644 > > > --- a/mm/shmem.c > > > +++ b/mm/shmem.c > > > @@ -640,7 +640,7 @@ static int shmem_add_to_page_cache(struct page *page, > > > xas_lock_irq(&xas); > > > entry = xas_find_conflict(&xas); > > > if (entry != expected) > > > - xas_set_err(&xas, -EEXIST); > > > + xas_set_err(&xas, expected ? -ENOENT : -EEXIST); > > > > Two things on this. > > > > Minor matter of taste, I'd prefer that as > > xas_set_err(&xas, entry ? -EEXIST : -ENOENT); > > which would be more general and more understandable - > > but what you have written should be fine for the actual callers. > > Yes, checking `expected' was to differentiate the behavior depending > on the callsite. But testing `entry' is more obvious in that location. > > > Except... I think returning -ENOENT there will not work correctly, > > in the case of a punched hole. Because (unless you've reworked it > > and I just haven't looked) shmem_getpage_gfp() knows to retry in > > the case of -EEXIST, but -ENOENT will percolate up to shmem_fault() > > and result in a SIGBUS, or a read/write error, when the hole should > > just get refilled instead. > > Good catch, I had indeed missed that. I'm going to make it retry on > -ENOENT as well. > > We could have it go directly to allocating a new page, but it seems > unnecessarily complicated: we've already been retrying in this > situation until now, so I would stick to "there was a race, retry." > > > Not something that needs fixing in a hurry (it took trinity to > > generate this racy case in the first place), I'll take another look > > once I've pulled it into a tree (or collected next mmotm) - unless > > you've already have changed it around by then. > > Attaching a delta fix based on your observations. > > Andrew, barring any objections to this, could you please fold it into > the version you have in your tree already? 
Not so strong as an objection, and I won't get to see whether your retry on -ENOENT is good (can -ENOENT arrive at that point from any other case, that might endlessly retry?) until I've got the full context; but I had arrived at the opposite conclusion overnight. Given that this case only appeared with a fuzzer, and stale swapcache reclaim is anyway relied upon to clean up after plenty of other such races, I think we should agree that I over-complicated the VM_BUG_ON removal originally, and it's best to kill that delete_from_swap_cache(), and the comment having to explain it, and your EEXIST/ENOENT distinction. (I haven't checked, but I suspect that the shmem_unuse_inode() case that I copied from, actually really needed to delete_from_swap_cache(), in order to swapoff the page without full retry of the big swapoff loop.) Hugh > > --- > > From 33d03ceebce0a6261d472ddc9c5a07940f44714c Mon Sep 17 00:00:00 2001 > From: Johannes Weiner <hannes@cmpxchg.org> > Date: Mon, 11 May 2020 10:45:14 -0400 > Subject: [PATCH] mm: memcontrol: convert page cache to a new > mem_cgroup_charge() API fix > > Incorporate Hugh's feedback: > > - shmem_getpage_gfp() needs to handle the new -ENOENT that was > previously implied in the -EEXIST when a swap entry changed under us > in any way. Otherwise hole punching could cause a racing fault to > SIGBUS instead of allocating a new page. > > - It is indeed page reclaim that picks up any swapcache we leave > stranded when free_swap_and_cache() runs on a page locked by > somebody else. Document that our delete_from_swap_cache() is an > optimization, not something we rely on for correctness. > > - Style cleanup: testing `expected' to decide on -EEXIST vs -ENOENT > differentiates the callsites, but is a bit awkward to read. Test > `entry' instead. 
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> > --- > mm/shmem.c | 15 +++++++++------ > 1 file changed, 9 insertions(+), 6 deletions(-) > > diff --git a/mm/shmem.c b/mm/shmem.c > index afd5a057ebb7..00fb001e8f3e 100644 > --- a/mm/shmem.c > +++ b/mm/shmem.c > @@ -638,7 +638,7 @@ static int shmem_add_to_page_cache(struct page *page, > xas_lock_irq(&xas); > entry = xas_find_conflict(&xas); > if (entry != expected) > - xas_set_err(&xas, expected ? -ENOENT : -EEXIST); > + xas_set_err(&xas, entry ? -EEXIST : -ENOENT); > xas_create_range(&xas); > if (xas_error(&xas)) > goto unlock; > @@ -1686,10 +1686,13 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index, > * We already confirmed swap under page lock, but > * free_swap_and_cache() only trylocks a page, so it > * is just possible that the entry has been truncated > - * or holepunched since swap was confirmed. > - * shmem_undo_range() will have done some of the > - * unaccounting, now delete_from_swap_cache() will do > - * the rest. > + * or holepunched since swap was confirmed. This could > + * occur at any time while the page is locked, and > + * usually page reclaim will take care of the stranded > + * swapcache page. But when we catch it, we may as > + * well clean up after ourselves: shmem_undo_range() > + * will have done some of the unaccounting, now > + * delete_from_swap_cache() will do the rest. > */ > if (error == -ENOENT) > delete_from_swap_cache(page); > @@ -1765,7 +1768,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index, > if (xa_is_value(page)) { > error = shmem_swapin_page(inode, index, &page, > sgp, gfp, vma, fault_type); > - if (error == -EEXIST) > + if (error == -EEXIST || error == -ENOENT) > goto repeat; > > *pagep = page; > -- > 2.26.2 >
On Mon, May 11, 2020 at 09:32:16AM -0700, Hugh Dickins wrote: > On Mon, 11 May 2020, Johannes Weiner wrote: > > On Mon, May 11, 2020 at 12:38:04AM -0700, Hugh Dickins wrote: > > > On Fri, 8 May 2020, Johannes Weiner wrote: > > > > > > > > I looked at this some more, as well as compared it to non-shmem > > > > swapping. My conclusion is - and Hugh may correct me on this - that > > > > the deletion looks mandatory but is actually an optimization. Page > > > > reclaim will ultimately pick these pages up. > > > > > > > > When non-shmem pages are swapped in by readahead (locked until IO > > > > completes) and their page tables are simultaneously unmapped, the > > > > zap_pte_range() code calls free_swap_and_cache() and the locked pages > > > > are stranded in the swap cache with no page table references. We rely > > > > on page reclaim to pick them up later on. > > > > > > > > The same appears to be true for shmem. If the references to the swap > > > > page are zapped while we're trying to swap in, we can strand the page > > > > in the swap cache. But it's not up to swapin to detect this reliably, > > > > it just frees the page more quickly than having to wait for reclaim. > > > > > > I think you've got all that exactly right, thanks for working it out. > > > It originates from v3.7's 215c02bc33bb ("tmpfs: fix shmem_getpage_gfp() > > > VM_BUG_ON") - in which I also had to thank you. > > > > I should have looked where it actually came from - I had forgotten > > about that patch! > > > > > I think I chose to do the delete_from_swap_cache() right there, partly > > > because of following shmem_unuse_inode() code which already did that, > > > partly on the basis that while we have to observe the case then it's > > > better to clean it up, and partly out of guilt that our page lock here > > > is what had prevented shmem_undo_range() from completing its job; but > > > I believe you're right that unused swapcache reclaim would sort it out > > > eventually. 
> >
> > That makes sense to me.
> >
> > > > diff --git a/mm/shmem.c b/mm/shmem.c
> > > > index e80167927dce..236642775f89 100644
> > > > --- a/mm/shmem.c
> > > > +++ b/mm/shmem.c
> > > > @@ -640,7 +640,7 @@ static int shmem_add_to_page_cache(struct page *page,
> > > >  		xas_lock_irq(&xas);
> > > >  		entry = xas_find_conflict(&xas);
> > > >  		if (entry != expected)
> > > > -			xas_set_err(&xas, -EEXIST);
> > > > +			xas_set_err(&xas, expected ? -ENOENT : -EEXIST);
> > >
> > > Two things on this.
> > >
> > > Minor matter of taste, I'd prefer that as
> > > 	xas_set_err(&xas, entry ? -EEXIST : -ENOENT);
> > > which would be more general and more understandable -
> > > but what you have written should be fine for the actual callers.
> >
> > Yes, checking `expected' was to differentiate the behavior depending
> > on the callsite. But testing `entry' is more obvious in that location.
> >
> > > Except... I think returning -ENOENT there will not work correctly,
> > > in the case of a punched hole. Because (unless you've reworked it
> > > and I just haven't looked) shmem_getpage_gfp() knows to retry in
> > > the case of -EEXIST, but -ENOENT will percolate up to shmem_fault()
> > > and result in a SIGBUS, or a read/write error, when the hole should
> > > just get refilled instead.
> >
> > Good catch, I had indeed missed that. I'm going to make it retry on
> > -ENOENT as well.
> >
> > We could have it go directly to allocating a new page, but it seems
> > unnecessarily complicated: we've already been retrying in this
> > situation until now, so I would stick to "there was a race, retry."
> >
> > > Not something that needs fixing in a hurry (it took trinity to
> > > generate this racy case in the first place), I'll take another look
> > > once I've pulled it into a tree (or collected next mmotm) - unless
> > > you've already changed it around by then.
> >
> > Attaching a delta fix based on your observations.
> >
> > Andrew, barring any objections to this, could you please fold it into
> > the version you have in your tree already?
>
> Not so strong as an objection, and I won't get to see whether your
> retry on -ENOENT is good (can -ENOENT arrive at that point from any
> other case, that might endlessly retry?) until I've got the full
> context; but I had arrived at the opposite conclusion overnight.
>
> Given that this case only appeared with a fuzzer, and stale swapcache
> reclaim is anyway relied upon to clean up after plenty of other such
> races, I think we should agree that I over-complicated the VM_BUG_ON
> removal originally, and it's best to kill that delete_from_swap_cache(),
> and the comment having to explain it, and your EEXIST/ENOENT distinction.
>
> (I haven't checked, but I suspect that the shmem_unuse_inode() case
> that I copied from, actually really needed to delete_from_swap_cache(),
> in order to swapoff the page without full retry of the big swapoff loop.)

Since commit b56a2d8af914 ("mm: rid swapoff of quadratic complexity"),
shmem_unuse_inode() doesn't have its own copy anymore - it uses
shmem_swapin_page().

However, that commit appears to have made shmem's private call to
delete_from_swap_cache() obsolete as well. Whereas before this change
we fully relied on shmem_unuse() to find and clear a shmem swap entry
and its swapcache page, we now only need it to clean out shmem's
private state in the inode, as it's followed by a loop over all
remaining swap slots, calling try_to_free_swap() on stragglers.
Unless I missed something, it's still merely an optimization, and we
can delete it for simplicity:

---

From fc9dcaf68c8b54baf365cd670fb5780c7f0d243f Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@cmpxchg.org>
Date: Mon, 11 May 2020 12:59:08 -0400
Subject: [PATCH] mm: shmem: remove rare optimization when swapin races with
 hole punching

Commit 215c02bc33bb ("tmpfs: fix shmem_getpage_gfp() VM_BUG_ON")
recognized that hole punching can race with swapin and removed the
BUG_ON() for a truncated entry from the swapin path.

The patch also added a swapcache deletion to optimize this rare case:
Since swapin has the page locked, and free_swap_and_cache() merely
trylocks, this situation can leave the page stranded in
swapcache. Usually, page reclaim picks up stale swapcache pages, and
the race can happen at any other time when the page is locked. (The
same happens for non-shmem swapin racing with page table zapping.) The
thinking here was: we already observed the race and we have the page
locked, we may as well do the cleanup instead of waiting for reclaim.

However, this optimization complicates the next patch which moves the
cgroup charging code around. As this is just a minor speedup for a
race condition that is so rare that it required a fuzzer to trigger
the original BUG_ON(), it's no longer worth the complications.

Suggested-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/shmem.c | 25 +++++++------------------
 1 file changed, 7 insertions(+), 18 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index d505b6cce4ab..729bbb3513cd 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1665,27 +1665,16 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 	}

 	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
-	if (!error) {
-		error = shmem_add_to_page_cache(page, mapping, index,
-						swp_to_radix_entry(swap), gfp);
-		/*
-		 * We already confirmed swap under page lock, and make
-		 * no memory allocation here, so usually no possibility
-		 * of error; but free_swap_and_cache() only trylocks a
-		 * page, so it is just possible that the entry has been
-		 * truncated or holepunched since swap was confirmed.
-		 * shmem_undo_range() will have done some of the
-		 * unaccounting, now delete_from_swap_cache() will do
-		 * the rest.
-		 */
-		if (error) {
-			mem_cgroup_cancel_charge(page, memcg);
-			delete_from_swap_cache(page);
-		}
-	}
 	if (error)
 		goto failed;

+	error = shmem_add_to_page_cache(page, mapping, index,
+					swp_to_radix_entry(swap), gfp);
+	if (error) {
+		mem_cgroup_cancel_charge(page, memcg);
+		goto failed;
+	}
+
 	mem_cgroup_commit_charge(page, memcg, true);

 	spin_lock_irq(&info->lock);
On Mon, May 11, 2020 at 02:10:58PM -0400, Johannes Weiner wrote:
> From fc9dcaf68c8b54baf365cd670fb5780c7f0d243f Mon Sep 17 00:00:00 2001
> From: Johannes Weiner <hannes@cmpxchg.org>
> Date: Mon, 11 May 2020 12:59:08 -0400
> Subject: [PATCH] mm: shmem: remove rare optimization when swapin races with
>  hole punching

And a new, conflict-resolved version of the patch this thread is
attached to:

---

From 7f630d9bc5d6f692298fd906edd5f48070b257c7 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@cmpxchg.org>
Date: Thu, 16 Apr 2020 15:08:07 -0400
Subject: [PATCH] mm: memcontrol: convert page cache to a new
 mem_cgroup_charge() API

The try/commit/cancel protocol that memcg uses dates back to when
pages used to be uncharged upon removal from the page cache, and thus
couldn't be committed before the insertion had succeeded. Nowadays,
pages are uncharged when they are physically freed; it doesn't matter
whether the insertion was successful or not. For the page cache, the
transaction dance has become unnecessary.

Introduce a mem_cgroup_charge() function that simply charges a newly
allocated page to a cgroup and sets up page->mem_cgroup in one single
step. If the insertion fails, the caller doesn't have to do anything
but free/put the page.

Then switch the page cache over to this new API.

Subsequent patches will also convert anon pages, but it needs a bit
more prep work. Right now, memcg depends on page->mapping being
already set up at the time of charging, so that it can maintain its
own MEMCG_CACHE and MEMCG_RSS counters. For anon, page->mapping is set
under the same pte lock under which the page is published, so a single
charge point that can block doesn't work there just yet.

The following prep patches will replace the private memcg counters
with the generic vmstat counters, thus removing the page->mapping
dependency, then complete the transition to the new single-point
charge API and delete the old transactional scheme.
v2: leave shmem swapcache when charging fails to avoid double IO (Joonsoo)
v3: rebase on preceding shmem simplification patch

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
---
 include/linux/memcontrol.h | 10 ++++++
 mm/filemap.c               | 24 ++++++-------
 mm/memcontrol.c            | 29 +++++++++++++--
 mm/shmem.c                 | 73 ++++++++++++++++----------------------
 4 files changed, 77 insertions(+), 59 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 30292d57c8af..57339514d960 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -379,6 +379,10 @@ int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
 void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
 			      bool lrucare);
 void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg);
+
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare);
+
 void mem_cgroup_uncharge(struct page *page);
 void mem_cgroup_uncharge_list(struct list_head *page_list);

@@ -893,6 +897,12 @@ static inline void mem_cgroup_cancel_charge(struct page *page,
 {
 }

+static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+				    gfp_t gfp_mask, bool lrucare)
+{
+	return 0;
+}
+
 static inline void mem_cgroup_uncharge(struct page *page)
 {
 }
diff --git a/mm/filemap.c b/mm/filemap.c
index ce200386736c..ee9882509566 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -832,7 +832,6 @@ static int __add_to_page_cache_locked(struct page *page,
 {
 	XA_STATE(xas, &mapping->i_pages, offset);
 	int huge = PageHuge(page);
-	struct mem_cgroup *memcg;
 	int error;
 	void *old;

@@ -840,17 +839,16 @@ static int __add_to_page_cache_locked(struct page *page,
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
 	mapping_set_update(&xas, mapping);

-	if (!huge) {
-		error = mem_cgroup_try_charge(page, current->mm,
-					      gfp_mask, &memcg);
-		if (error)
-			return error;
-	}
-
 	get_page(page);
 	page->mapping = mapping;
 	page->index = offset;

+	if (!huge) {
+		error = mem_cgroup_charge(page, current->mm, gfp_mask, false);
+		if (error)
+			goto error;
+	}
+
 	do {
 		xas_lock_irq(&xas);
 		old = xas_load(&xas);
@@ -874,20 +872,18 @@ static int __add_to_page_cache_locked(struct page *page,
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));

-	if (xas_error(&xas))
+	if (xas_error(&xas)) {
+		error = xas_error(&xas);
 		goto error;
+	}

-	if (!huge)
-		mem_cgroup_commit_charge(page, memcg, false);
 	trace_mm_filemap_add_to_page_cache(page);
 	return 0;
 error:
 	page->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
-	if (!huge)
-		mem_cgroup_cancel_charge(page, memcg);
 	put_page(page);
-	return xas_error(&xas);
+	return error;
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8188d462d7ce..1d45a09b334f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6578,6 +6578,33 @@ void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg)
 	cancel_charge(memcg, nr_pages);
 }

+/**
+ * mem_cgroup_charge - charge a newly allocated page to a cgroup
+ * @page: page to charge
+ * @mm: mm context of the victim
+ * @gfp_mask: reclaim mode
+ * @lrucare: page might be on the LRU already
+ *
+ * Try to charge @page to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp_mask if necessary.
+ *
+ * Returns 0 on success. Otherwise, an error code is returned.
+ */
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare)
+{
+	struct mem_cgroup *memcg;
+	int ret;
+
+	VM_BUG_ON_PAGE(!page->mapping, page);
+
+	ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
+	if (ret)
+		return ret;
+	mem_cgroup_commit_charge(page, memcg, lrucare);
+	return 0;
+}
+
 struct uncharge_gather {
 	struct mem_cgroup *memcg;
 	unsigned long pgpgout;
@@ -6625,8 +6652,6 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 {
 	VM_BUG_ON_PAGE(PageLRU(page), page);
-	VM_BUG_ON_PAGE(page_count(page) && !is_zone_device_page(page) &&
-			!PageHWPoison(page) , page);

 	if (!page->mem_cgroup)
 		return;
diff --git a/mm/shmem.c b/mm/shmem.c
index 729bbb3513cd..0d9615723152 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -605,11 +605,13 @@ static inline bool is_huge_enabled(struct shmem_sb_info *sbinfo)
  */
 static int shmem_add_to_page_cache(struct page *page,
 				   struct address_space *mapping,
-				   pgoff_t index, void *expected, gfp_t gfp)
+				   pgoff_t index, void *expected, gfp_t gfp,
+				   struct mm_struct *charge_mm)
 {
 	XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page));
 	unsigned long i = 0;
 	unsigned long nr = compound_nr(page);
+	int error;

 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -621,6 +623,16 @@ static int shmem_add_to_page_cache(struct page *page,
 	page->mapping = mapping;
 	page->index = index;

+	error = mem_cgroup_charge(page, charge_mm, gfp, PageSwapCache(page));
+	if (error) {
+		if (!PageSwapCache(page) && PageTransHuge(page)) {
+			count_vm_event(THP_FILE_FALLBACK);
+			count_vm_event(THP_FILE_FALLBACK_CHARGE);
+		}
+		goto error;
+	}
+	cgroup_throttle_swaprate(page, gfp);
+
 	do {
 		void *entry;
 		xas_lock_irq(&xas);
@@ -648,12 +660,15 @@ static int shmem_add_to_page_cache(struct page *page,
 	} while (xas_nomem(&xas, gfp));

 	if (xas_error(&xas)) {
-		page->mapping = NULL;
-		page_ref_sub(page, nr);
-		return xas_error(&xas);
+		error = xas_error(&xas);
+		goto error;
 	}

 	return 0;
+error:
+	page->mapping = NULL;
+	page_ref_sub(page, nr);
+	return error;
 }

 /*
@@ -1619,7 +1634,6 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
-	struct mem_cgroup *memcg;
 	struct page *page;
 	swp_entry_t swap;
 	int error;
@@ -1664,18 +1678,11 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 		goto failed;
 	}

-	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
-	if (error)
-		goto failed;
-
 	error = shmem_add_to_page_cache(page, mapping, index,
-					swp_to_radix_entry(swap), gfp);
-	if (error) {
-		mem_cgroup_cancel_charge(page, memcg);
+					swp_to_radix_entry(swap), gfp,
+					charge_mm);
+	if (error)
 		goto failed;
-	}
-
-	mem_cgroup_commit_charge(page, memcg, true);

 	spin_lock_irq(&info->lock);
 	info->swapped--;
@@ -1722,7 +1729,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct shmem_sb_info *sbinfo;
 	struct mm_struct *charge_mm;
-	struct mem_cgroup *memcg;
 	struct page *page;
 	enum sgp_type sgp_huge = sgp;
 	pgoff_t hindex = index;
@@ -1847,21 +1853,11 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	if (sgp == SGP_WRITE)
 		__SetPageReferenced(page);

-	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
-	if (error) {
-		if (PageTransHuge(page)) {
-			count_vm_event(THP_FILE_FALLBACK);
-			count_vm_event(THP_FILE_FALLBACK_CHARGE);
-		}
-		goto unacct;
-	}
 	error = shmem_add_to_page_cache(page, mapping, hindex,
-					NULL, gfp & GFP_RECLAIM_MASK);
-	if (error) {
-		mem_cgroup_cancel_charge(page, memcg);
+					NULL, gfp & GFP_RECLAIM_MASK,
+					charge_mm);
+	if (error)
 		goto unacct;
-	}
-	mem_cgroup_commit_charge(page, memcg, false);
 	lru_cache_add_anon(page);

 	spin_lock_irq(&info->lock);
@@ -2299,7 +2295,6 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	struct address_space *mapping = inode->i_mapping;
 	gfp_t gfp = mapping_gfp_mask(mapping);
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
-	struct mem_cgroup *memcg;
 	spinlock_t *ptl;
 	void *page_kaddr;
 	struct page *page;
@@ -2349,16 +2344,10 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	if (unlikely(offset >= max_off))
 		goto out_release;

-	ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, &memcg);
-	if (ret)
-		goto out_release;
-
 	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
-				      gfp & GFP_RECLAIM_MASK);
+				      gfp & GFP_RECLAIM_MASK, dst_mm);
 	if (ret)
-		goto out_release_uncharge;
-
-	mem_cgroup_commit_charge(page, memcg, false);
+		goto out_release;

 	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
 	if (dst_vma->vm_flags & VM_WRITE)
@@ -2379,11 +2368,11 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	ret = -EFAULT;
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (unlikely(offset >= max_off))
-		goto out_release_uncharge_unlock;
+		goto out_release_unlock;

 	ret = -EEXIST;
 	if (!pte_none(*dst_pte))
-		goto out_release_uncharge_unlock;
+		goto out_release_unlock;

 	lru_cache_add_anon(page);

@@ -2404,12 +2393,10 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	ret = 0;
 out:
 	return ret;
-out_release_uncharge_unlock:
+out_release_unlock:
 	pte_unmap_unlock(dst_pte, ptl);
 	ClearPageDirty(page);
 	delete_from_page_cache(page);
-out_release_uncharge:
-	mem_cgroup_cancel_charge(page, memcg);
 out_release:
 	unlock_page(page);
 	put_page(page);
On Mon, 11 May 2020, Johannes Weiner wrote:
>
> Since commit b56a2d8af914 ("mm: rid swapoff of quadratic complexity"),
> shmem_unuse_inode() doesn't have its own copy anymore - it uses
> shmem_swapin_page().
>
> However, that commit appears to have made shmem's private call to
> delete_from_swap_cache() obsolete as well. Whereas before this change
> we fully relied on shmem_unuse() to find and clear a shmem swap entry
> and its swapcache page, we now only need it to clean out shmem's
> private state in the inode, as it's followed by a loop over all
> remaining swap slots, calling try_to_free_swap() on stragglers.

Great, you've looked deeper into the current situation than I had.

>
> Unless I missed something, it's still merely an optimization, and we
> can delete it for simplicity:

Yes, nice ---s, simpler code, and a good idea to separate it out as
a precursor: thanks, Hannes.

>
> ---
>
> From fc9dcaf68c8b54baf365cd670fb5780c7f0d243f Mon Sep 17 00:00:00 2001
> From: Johannes Weiner <hannes@cmpxchg.org>
> Date: Mon, 11 May 2020 12:59:08 -0400
> Subject: [PATCH] mm: shmem: remove rare optimization when swapin races with
>  hole punching
>
> Commit 215c02bc33bb ("tmpfs: fix shmem_getpage_gfp() VM_BUG_ON")
> recognized that hole punching can race with swapin and removed the
> BUG_ON() for a truncated entry from the swapin path.
>
> The patch also added a swapcache deletion to optimize this rare case:
> Since swapin has the page locked, and free_swap_and_cache() merely
> trylocks, this situation can leave the page stranded in
> swapcache. Usually, page reclaim picks up stale swapcache pages, and
> the race can happen at any other time when the page is locked. (The
> same happens for non-shmem swapin racing with page table zapping.) The
> thinking here was: we already observed the race and we have the page
> locked, we may as well do the cleanup instead of waiting for reclaim.
>
> However, this optimization complicates the next patch which moves the
> cgroup charging code around. As this is just a minor speedup for a
> race condition that is so rare that it required a fuzzer to trigger
> the original BUG_ON(), it's no longer worth the complications.
>
> Suggested-by: Hugh Dickins <hughd@google.com>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Hugh Dickins <hughd@google.com>

(if one is allowed to suggest and to ack)

> ---
>  mm/shmem.c | 25 +++++++------------------
>  1 file changed, 7 insertions(+), 18 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index d505b6cce4ab..729bbb3513cd 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1665,27 +1665,16 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
>  	}
>
>  	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
> -	if (!error) {
> -		error = shmem_add_to_page_cache(page, mapping, index,
> -						swp_to_radix_entry(swap), gfp);
> -		/*
> -		 * We already confirmed swap under page lock, and make
> -		 * no memory allocation here, so usually no possibility
> -		 * of error; but free_swap_and_cache() only trylocks a
> -		 * page, so it is just possible that the entry has been
> -		 * truncated or holepunched since swap was confirmed.
> -		 * shmem_undo_range() will have done some of the
> -		 * unaccounting, now delete_from_swap_cache() will do
> -		 * the rest.
> -		 */
> -		if (error) {
> -			mem_cgroup_cancel_charge(page, memcg);
> -			delete_from_swap_cache(page);
> -		}
> -	}
>  	if (error)
>  		goto failed;
>
> +	error = shmem_add_to_page_cache(page, mapping, index,
> +					swp_to_radix_entry(swap), gfp);
> +	if (error) {
> +		mem_cgroup_cancel_charge(page, memcg);
> +		goto failed;
> +	}
> +
>  	mem_cgroup_commit_charge(page, memcg, true);
>
>  	spin_lock_irq(&info->lock);
> --
> 2.26.2
>
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c7875a48c8c1..5e8b0e38f145 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -367,6 +367,10 @@ int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
 void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
 			      bool lrucare);
 void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg);
+
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare);
+
 void mem_cgroup_uncharge(struct page *page);
 void mem_cgroup_uncharge_list(struct list_head *page_list);

@@ -872,6 +876,12 @@ static inline void mem_cgroup_cancel_charge(struct page *page,
 {
 }

+static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+				    gfp_t gfp_mask, bool lrucare)
+{
+	return 0;
+}
+
 static inline void mem_cgroup_uncharge(struct page *page)
 {
 }
diff --git a/mm/filemap.c b/mm/filemap.c
index 5b31af9d5b1b..5bdbda965177 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -832,7 +832,6 @@ static int __add_to_page_cache_locked(struct page *page,
 {
 	XA_STATE(xas, &mapping->i_pages, offset);
 	int huge = PageHuge(page);
-	struct mem_cgroup *memcg;
 	int error;
 	void *old;

@@ -840,17 +839,16 @@ static int __add_to_page_cache_locked(struct page *page,
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
 	mapping_set_update(&xas, mapping);

-	if (!huge) {
-		error = mem_cgroup_try_charge(page, current->mm,
-					      gfp_mask, &memcg);
-		if (error)
-			return error;
-	}
-
 	get_page(page);
 	page->mapping = mapping;
 	page->index = offset;

+	if (!huge) {
+		error = mem_cgroup_charge(page, current->mm, gfp_mask, false);
+		if (error)
+			goto error;
+	}
+
 	do {
 		xas_lock_irq(&xas);
 		old = xas_load(&xas);
@@ -874,20 +872,18 @@ static int __add_to_page_cache_locked(struct page *page,
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));

-	if (xas_error(&xas))
+	if (xas_error(&xas)) {
+		error = xas_error(&xas);
 		goto error;
+	}

-	if (!huge)
-		mem_cgroup_commit_charge(page, memcg, false);
 	trace_mm_filemap_add_to_page_cache(page);
 	return 0;
 error:
 	page->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
-	if (!huge)
-		mem_cgroup_cancel_charge(page, memcg);
 	put_page(page);
-	return xas_error(&xas);
+	return error;
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 711d6dd5cbb1..b38c0a672d26 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6577,6 +6577,33 @@ void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg)
 	cancel_charge(memcg, nr_pages);
 }

+/**
+ * mem_cgroup_charge - charge a newly allocated page to a cgroup
+ * @page: page to charge
+ * @mm: mm context of the victim
+ * @gfp_mask: reclaim mode
+ * @lrucare: page might be on the LRU already
+ *
+ * Try to charge @page to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp_mask if necessary.
+ *
+ * Returns 0 on success. Otherwise, an error code is returned.
+ */
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare)
+{
+	struct mem_cgroup *memcg;
+	int ret;
+
+	VM_BUG_ON_PAGE(!page->mapping, page);
+
+	ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
+	if (ret)
+		return ret;
+	mem_cgroup_commit_charge(page, memcg, lrucare);
+	return 0;
+}
+
 struct uncharge_gather {
 	struct mem_cgroup *memcg;
 	unsigned long pgpgout;
diff --git a/mm/shmem.c b/mm/shmem.c
index 52c66801321e..2384f6c7ef71 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -605,11 +605,13 @@ static inline bool is_huge_enabled(struct shmem_sb_info *sbinfo)
  */
 static int shmem_add_to_page_cache(struct page *page,
 				   struct address_space *mapping,
-				   pgoff_t index, void *expected, gfp_t gfp)
+				   pgoff_t index, void *expected, gfp_t gfp,
+				   struct mm_struct *charge_mm)
 {
 	XA_STATE_ORDER(xas, &mapping->i_pages, index, compound_order(page));
 	unsigned long i = 0;
 	unsigned long nr = compound_nr(page);
+	int error;

 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -621,6 +623,16 @@ static int shmem_add_to_page_cache(struct page *page,
 	page->mapping = mapping;
 	page->index = index;

+	error = mem_cgroup_charge(page, charge_mm, gfp, PageSwapCache(page));
+	if (error) {
+		if (!PageSwapCache(page) && PageTransHuge(page)) {
+			count_vm_event(THP_FILE_FALLBACK);
+			count_vm_event(THP_FILE_FALLBACK_CHARGE);
+		}
+		goto error;
+	}
+	cgroup_throttle_swaprate(page, gfp);
+
 	do {
 		void *entry;
 		xas_lock_irq(&xas);
@@ -648,12 +660,15 @@ static int shmem_add_to_page_cache(struct page *page,
 	} while (xas_nomem(&xas, gfp));

 	if (xas_error(&xas)) {
-		page->mapping = NULL;
-		page_ref_sub(page, nr);
-		return xas_error(&xas);
+		error = xas_error(&xas);
+		goto error;
 	}

 	return 0;
+error:
+	page->mapping = NULL;
+	page_ref_sub(page, nr);
+	return error;
 }

 /*
@@ -1619,7 +1634,6 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct mm_struct *charge_mm = vma ? vma->vm_mm : current->mm;
-	struct mem_cgroup *memcg;
 	struct page *page;
 	swp_entry_t swap;
 	int error;
@@ -1664,29 +1678,22 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 		goto failed;
 	}

-	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
-	if (!error) {
-		error = shmem_add_to_page_cache(page, mapping, index,
-						swp_to_radix_entry(swap), gfp);
-		/*
-		 * We already confirmed swap under page lock, and make
-		 * no memory allocation here, so usually no possibility
-		 * of error; but free_swap_and_cache() only trylocks a
-		 * page, so it is just possible that the entry has been
-		 * truncated or holepunched since swap was confirmed.
-		 * shmem_undo_range() will have done some of the
-		 * unaccounting, now delete_from_swap_cache() will do
-		 * the rest.
-		 */
-		if (error) {
-			mem_cgroup_cancel_charge(page, memcg);
-			delete_from_swap_cache(page);
-		}
-	}
-	if (error)
+	error = shmem_add_to_page_cache(page, mapping, index,
+					swp_to_radix_entry(swap), gfp,
+					charge_mm);
+	/*
+	 * We already confirmed swap under page lock, and make no
+	 * memory allocation here, so usually no possibility of error;
+	 * but free_swap_and_cache() only trylocks a page, so it is
+	 * just possible that the entry has been truncated or
+	 * holepunched since swap was confirmed. shmem_undo_range()
+	 * will have done some of the unaccounting, now
+	 * delete_from_swap_cache() will do the rest.
+	 */
+	if (error) {
+		delete_from_swap_cache(page);
 		goto failed;
-
-	mem_cgroup_commit_charge(page, memcg, true);
+	}

 	spin_lock_irq(&info->lock);
 	info->swapped--;
@@ -1733,7 +1740,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct shmem_sb_info *sbinfo;
 	struct mm_struct *charge_mm;
-	struct mem_cgroup *memcg;
 	struct page *page;
 	enum sgp_type sgp_huge = sgp;
 	pgoff_t hindex = index;
@@ -1858,21 +1864,11 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	if (sgp == SGP_WRITE)
 		__SetPageReferenced(page);

-	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg);
-	if (error) {
-		if (PageTransHuge(page)) {
-			count_vm_event(THP_FILE_FALLBACK);
-			count_vm_event(THP_FILE_FALLBACK_CHARGE);
-		}
-		goto unacct;
-	}
 	error = shmem_add_to_page_cache(page, mapping, hindex,
-					NULL, gfp & GFP_RECLAIM_MASK);
-	if (error) {
-		mem_cgroup_cancel_charge(page, memcg);
+					NULL, gfp & GFP_RECLAIM_MASK,
+					charge_mm);
+	if (error)
 		goto unacct;
-	}
-	mem_cgroup_commit_charge(page, memcg, false);
 	lru_cache_add_anon(page);

 	spin_lock_irq(&info->lock);
@@ -2307,7 +2303,6 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	struct address_space *mapping = inode->i_mapping;
 	gfp_t gfp = mapping_gfp_mask(mapping);
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
-	struct mem_cgroup *memcg;
 	spinlock_t *ptl;
 	void *page_kaddr;
 	struct page *page;
@@ -2357,16 +2352,10 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	if (unlikely(offset >= max_off))
 		goto out_release;

-	ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, &memcg);
-	if (ret)
-		goto out_release;
-
 	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
-				      gfp & GFP_RECLAIM_MASK);
+				      gfp & GFP_RECLAIM_MASK, dst_mm);
 	if (ret)
-		goto out_release_uncharge;
-
-	mem_cgroup_commit_charge(page, memcg, false);
+		goto out_release;

 	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
 	if (dst_vma->vm_flags & VM_WRITE)
@@ -2387,11 +2376,11 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	ret = -EFAULT;
 	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (unlikely(offset >= max_off))
-		goto out_release_uncharge_unlock;
+		goto out_release_unlock;

 	ret = -EEXIST;
 	if (!pte_none(*dst_pte))
-		goto out_release_uncharge_unlock;
+		goto out_release_unlock;

 	lru_cache_add_anon(page);

@@ -2412,12 +2401,10 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	ret = 0;
 out:
 	return ret;
-out_release_uncharge_unlock:
+out_release_unlock:
 	pte_unmap_unlock(dst_pte, ptl);
 	ClearPageDirty(page);
 	delete_from_page_cache(page);
-out_release_uncharge:
-	mem_cgroup_cancel_charge(page, memcg);
 out_release:
 	unlock_page(page);
 	put_page(page);
The try/commit/cancel protocol that memcg uses dates back to when
pages used to be uncharged upon removal from the page cache, and thus
couldn't be committed before the insertion had succeeded. Nowadays,
pages are uncharged when they are physically freed; it doesn't matter
whether the insertion was successful or not. For the page cache, the
transaction dance has become unnecessary.

Introduce a mem_cgroup_charge() function that simply charges a newly
allocated page to a cgroup and sets up page->mem_cgroup in one single
step. If the insertion fails, the caller doesn't have to do anything
but free/put the page.

Then switch the page cache over to this new API.

Subsequent patches will also convert anon pages, but it needs a bit
more prep work. Right now, memcg depends on page->mapping being
already set up at the time of charging, so that it can maintain its
own MEMCG_CACHE and MEMCG_RSS counters. For anon, page->mapping is set
under the same pte lock under which the page is published, so a single
charge point that can block doesn't work there just yet.

The following prep patches will replace the private memcg counters
with the generic vmstat counters, thus removing the page->mapping
dependency, then complete the transition to the new single-point
charge API and delete the old transactional scheme.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/memcontrol.h | 10 ++++
 mm/filemap.c               | 24 ++++------
 mm/memcontrol.c            | 27 +++++++++++
 mm/shmem.c                 | 97 +++++++++++++++++---------------------
 4 files changed, 89 insertions(+), 69 deletions(-)