From patchwork Mon Apr 27 07:02:50 2020
X-Patchwork-Submitter: Alex Shi
X-Patchwork-Id: 11511259
From: Alex Shi
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
    lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
    richard.weiyang@gmail.com
Cc: Alex Shi, Seth Jennings, Dan Streetman,
    Vitaly Wool
Subject: [PATCH v10 01/15] mm/swap: use vmf to clean up swapin function
 parameters
Date: Mon, 27 Apr 2020 15:02:50 +0800
Message-Id: <1587970985-21629-2-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1587970985-21629-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1587970985-21629-1-git-send-email-alex.shi@linux.alibaba.com>

Fold the parameters struct vm_area_struct *vma and unsigned long addr
into struct vm_fault vmf; this makes the function call paths more
readable.

Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Seth Jennings
Cc: Dan Streetman
Cc: Vitaly Wool
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/swap.h |  6 ++----
 mm/madvise.c         | 11 +++++++----
 mm/swap_state.c      | 23 ++++++++++-------------
 mm/swapfile.c        |  8 +++++---
 mm/zswap.c           |  3 ++-
 5 files changed, 26 insertions(+), 25 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index c453d08e07fb..6ca3adf62fe0 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -417,11 +417,9 @@ extern struct page *lookup_swap_cache(swp_entry_t entry,
 					struct vm_area_struct *vma,
 					unsigned long addr);
 extern struct page *read_swap_cache_async(swp_entry_t, gfp_t,
-			struct vm_area_struct *vma, unsigned long addr,
-			bool do_poll);
+			struct vm_fault *vmf, bool do_poll);
 extern struct page *__read_swap_cache_async(swp_entry_t, gfp_t,
-			struct vm_area_struct *vma, unsigned long addr,
-			bool *new_page_allocated);
+			struct vm_fault *vmf, bool *new_page_allocated);
 extern struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 				struct vm_fault *vmf);
 extern struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
diff --git a/mm/madvise.c b/mm/madvise.c
index 4bb30ed6c8d2..e9bd80087dbb 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -184,8 +184,8 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 	unsigned long end, struct mm_walk *walk)
 {
 	pte_t *orig_pte;
-	struct vm_area_struct *vma = walk->private;
 	unsigned long index;
+	struct vm_fault vmf = { .vma = walk->private};
 
 	if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 		return 0;
@@ -196,7 +196,8 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 		struct page *page;
 		spinlock_t *ptl;
 
-		orig_pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
+		orig_pte = pte_offset_map_lock(vmf.vma->vm_mm,
+				pmd, start, &ptl);
 		pte = *(orig_pte + ((index - start) / PAGE_SIZE));
 		pte_unmap_unlock(orig_pte, ptl);
 
@@ -206,8 +207,9 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 		if (unlikely(non_swap_entry(entry)))
 			continue;
 
+		vmf.address = index;
 		page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
-							vma, index, false);
+							&vmf, false);
 		if (page)
 			put_page(page);
 	}
@@ -226,6 +228,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 	pgoff_t index;
 	struct page *page;
 	swp_entry_t swap;
+	struct vm_fault vmf = { .vma = NULL, .address = 0};
 
 	for (; start < end; start += PAGE_SIZE) {
 		index = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
@@ -238,7 +241,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 		}
 		swap = radix_to_swp_entry(page);
 		page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
-							NULL, 0, false);
+							&vmf, false);
 		if (page)
 			put_page(page);
 	}
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 26fded65c30d..b056c7ec941f 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -357,11 +357,12 @@ struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
 }
 
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
-			struct vm_area_struct *vma, unsigned long addr,
-			bool *new_page_allocated)
+			struct vm_fault *vmf, bool *new_page_allocated)
 {
 	struct swap_info_struct *si;
 	struct page *page;
+	struct vm_area_struct *vma = vmf->vma;
+	unsigned long addr = vmf->address;
 
 	*new_page_allocated = false;
 
@@ -453,11 +454,11 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  * the swap entry is no longer in use.
  */
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
-		struct vm_area_struct *vma, unsigned long addr, bool do_poll)
+		struct vm_fault *vmf, bool do_poll)
 {
 	bool page_was_allocated;
 	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
-			vma, addr, &page_was_allocated);
+			vmf, &page_was_allocated);
 
 	if (page_was_allocated)
 		swap_readpage(retpage, do_poll);
@@ -554,8 +555,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	struct swap_info_struct *si = swp_swap_info(entry);
 	struct blk_plug plug;
 	bool do_poll = true, page_allocated;
-	struct vm_area_struct *vma = vmf->vma;
-	unsigned long addr = vmf->address;
 
 	mask = swapin_nr_pages(offset) - 1;
 	if (!mask)
@@ -582,7 +581,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		/* Ok, do the async read-ahead now */
 		page = __read_swap_cache_async(
 			swp_entry(swp_type(entry), offset),
-			gfp_mask, vma, addr, &page_allocated);
+			gfp_mask, vmf, &page_allocated);
 		if (!page)
 			continue;
 		if (page_allocated) {
@@ -598,7 +597,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	blk_finish_plug(&plug);
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 skip:
-	return read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll);
+	return read_swap_cache_async(entry, gfp_mask, vmf, do_poll);
 }
 
 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -730,7 +729,6 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 				       struct vm_fault *vmf)
 {
 	struct blk_plug plug;
-	struct vm_area_struct *vma = vmf->vma;
 	struct page *page;
 	pte_t *pte, pentry;
 	swp_entry_t entry;
@@ -753,8 +751,8 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 		entry = pte_to_swp_entry(pentry);
 		if (unlikely(non_swap_entry(entry)))
 			continue;
-		page = __read_swap_cache_async(entry, gfp_mask, vma,
-					       vmf->address, &page_allocated);
+		page = __read_swap_cache_async(entry, gfp_mask, vmf,
+					       &page_allocated);
 		if (!page)
 			continue;
 		if (page_allocated) {
@@ -769,8 +767,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 	blk_finish_plug(&plug);
 	lru_add_drain();
 skip:
-	return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
-				     ra_info.win == 1);
+	return read_swap_cache_async(fentry, gfp_mask, vmf, ra_info.win == 1);
 }
 
 /**
diff --git a/mm/swapfile.c b/mm/swapfile.c
index e41074848f25..0c4d604fbf8d 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1850,12 +1850,14 @@ static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
  * just let do_wp_page work it out if a write is requested later - to
  * force COW, vm_page_prot omits write permission from any private vma.
  */
-static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, swp_entry_t entry, struct page *page)
+static int unuse_pte(struct vm_fault *vmf, swp_entry_t entry, struct page *page)
 {
 	struct page *swapcache;
 	spinlock_t *ptl;
 	pte_t *pte;
+	struct vm_area_struct *vma = vmf->vma;
+	unsigned long addr = vmf->address;
+	pmd_t *pmd = vmf->pmd;
 	int ret = 1;
 
 	swapcache = page;
@@ -1938,7 +1940,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 
 		lock_page(page);
 		wait_on_page_writeback(page);
-		ret = unuse_pte(vma, pmd, addr, entry, page);
+		ret = unuse_pte(&vmf, entry, page);
 		if (ret < 0) {
 			unlock_page(page);
 			put_page(page);
diff --git a/mm/zswap.c b/mm/zswap.c
index fbb782924ccc..ef5a3fe442d6 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -846,9 +846,10 @@ static int zswap_get_swap_cache_page(swp_entry_t entry,
 				struct page **retpage)
 {
 	bool page_was_allocated;
+	struct vm_fault vmf = { .vma = NULL, .address = 0};
 
 	*retpage = __read_swap_cache_async(entry, GFP_KERNEL,
-			NULL, 0, &page_was_allocated);
+			&vmf, &page_was_allocated);
 	if (page_was_allocated)
 		return ZSWAP_SWAPCACHE_NEW;
 	if (!*retpage)
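
For readers following the series: the calling convention after this patch,
for a caller that has no real page fault in hand (the madvise.c and zswap.c
hunks above), looks roughly like the sketch below. A struct vm_fault is
built on the stack and only the fields the swapin helpers actually read
(.vma and .address; unuse_pte() additionally reads .pmd) are filled in.
This is an illustrative sketch against the patched API, not part of the
patch; the helper name swapin_one_entry is hypothetical.

	#include <linux/mm.h>
	#include <linux/swap.h>

	/*
	 * Hypothetical helper, for illustration only: swap in the page
	 * backing @entry on behalf of (@vma, @addr), mirroring the
	 * pattern swapin_walk_pmd_entry() uses after this change.
	 */
	static struct page *swapin_one_entry(swp_entry_t entry,
					     struct vm_area_struct *vma,
					     unsigned long addr)
	{
		/*
		 * Only .vma and .address are consumed on this path;
		 * .vma may be NULL for callers with no VMA context,
		 * as in the zswap hunk above.
		 */
		struct vm_fault vmf = {
			.vma = vma,
			.address = addr,
		};

		return read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
					     &vmf, false);
	}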