From patchwork Sun Nov 19 19:47:27 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13460667
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
	Johannes Weiner, Matthew Wilcox, Michal Hocko,
	linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 11/24] mm/swap: also handle swapcache lookup in swapin_readahead
Date: Mon, 20 Nov 2023 03:47:27 +0800
Message-ID: <20231119194740.94101-12-ryncsn@gmail.com>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>
Reply-To: Kairui Song

From: Kairui Song

Move the swap cache lookup that do_swap_page() and unuse_pte_range()
currently open-code into swapin_readahead(), and let it report how the
swap cache was used through a new enum swap_cache_result (hit, miss, or
bypass) instead of the old bool.

No feature change, just prepare for later commits.

Signed-off-by: Kairui Song
---
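
Not part of the patch itself, but for review convenience: a minimal sketch
of how a caller is expected to consume the reworked swapin_readahead().
It assumes the same kernel-internal context as do_swap_page() below
(entry, vmf and the GFP flags are set up by the fault path); the local
variable names are illustrative only.

	enum swap_cache_result cache_result;
	struct folio *swapcache = NULL, *folio = NULL;
	struct page *page;

	/* One call now covers cache lookup, readahead and direct swapin. */
	page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf, &cache_result);
	if (page) {
		folio = page_folio(page);
		if (cache_result != SWAP_CACHE_HIT) {
			/* Not found in the swap cache: account a major fault. */
			count_vm_event(PGMAJFAULT);
		}
		if (cache_result != SWAP_CACHE_BYPASS) {
			/* The folio also sits in the swap cache. */
			swapcache = folio;
		}
	} else {
		/* Nothing swapped in: the pte may have changed under us. */
	}

The point of the enum over the old bool is that a cache hit, a miss that
went through readahead, and a bypass that skipped the swap cache entirely
can now be told apart by the caller.
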
 mm/memory.c     | 61 +++++++++++++++++++++++--------------------
 mm/swap.h       | 10 ++++++--
 mm/swap_state.c | 26 +++++++++++++--------
 mm/swapfile.c   | 30 +++++++++++-------------
 4 files changed, 66 insertions(+), 61 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f4237a2e3b93..22af9f3e8c75 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3786,13 +3786,13 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
 vm_fault_t do_swap_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct folio *swapcache, *folio = NULL;
+	struct folio *swapcache = NULL, *folio = NULL;
+	enum swap_cache_result cache_result;
 	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
 	bool exclusive = false;
 	swp_entry_t entry;
-	bool swapcached;
 	pte_t pte;
 	vm_fault_t ret = 0;
 
@@ -3850,42 +3850,37 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (unlikely(!si))
 		goto out;
 
-	folio = swap_cache_get_folio(entry, vma, vmf->address);
-	if (folio)
-		page = folio_file_page(folio, swp_offset(entry));
-	swapcache = folio;
-
-	if (!folio) {
-		page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-					vmf, &swapcached);
-		if (page) {
-			folio = page_folio(page);
-			if (swapcached)
-				swapcache = folio;
-		} else {
+	page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
+				vmf, &cache_result);
+	if (page) {
+		folio = page_folio(page);
+		if (cache_result != SWAP_CACHE_HIT) {
+			/* Had to read the page from swap area: Major fault */
+			ret = VM_FAULT_MAJOR;
+			count_vm_event(PGMAJFAULT);
+			count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
+		}
+		if (cache_result != SWAP_CACHE_BYPASS)
+			swapcache = folio;
+		if (PageHWPoison(page)) {
 			/*
-			 * Back out if somebody else faulted in this pte
-			 * while we released the pte lock.
+			 * hwpoisoned dirty swapcache pages are kept for killing
+			 * owner processes (which may be unknown at hwpoison time)
 			 */
-			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-					vmf->address, &vmf->ptl);
-			if (likely(vmf->pte &&
-				   pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
-				ret = VM_FAULT_OOM;
-			goto unlock;
+			ret = VM_FAULT_HWPOISON;
+			goto out_release;
 		}
-
-		/* Had to read the page from swap area: Major fault */
-		ret = VM_FAULT_MAJOR;
-		count_vm_event(PGMAJFAULT);
-		count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
-	} else if (PageHWPoison(page)) {
+	} else {
 		/*
-		 * hwpoisoned dirty swapcache pages are kept for killing
-		 * owner processes (which may be unknown at hwpoison time)
+		 * Back out if somebody else faulted in this pte
+		 * while we released the pte lock.
 		 */
-		ret = VM_FAULT_HWPOISON;
-		goto out_release;
+		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
+				vmf->address, &vmf->ptl);
+		if (likely(vmf->pte &&
+			   pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+			ret = VM_FAULT_OOM;
+		goto unlock;
 	}
 
 	ret |= folio_lock_or_retry(folio, vmf);
diff --git a/mm/swap.h b/mm/swap.h
index a9a654af791e..ac9136eee690 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -30,6 +30,12 @@ extern struct address_space *swapper_spaces[];
 	(&swapper_spaces[swp_type(entry)][swp_offset(entry) \
 		>> SWAP_ADDRESS_SPACE_SHIFT])
 
+enum swap_cache_result {
+	SWAP_CACHE_HIT,
+	SWAP_CACHE_MISS,
+	SWAP_CACHE_BYPASS,
+};
+
 void show_swap_cache_info(void);
 bool add_to_swap(struct folio *folio);
 void *get_shadow_from_swap_cache(swp_entry_t entry);
@@ -55,7 +61,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 				    struct mempolicy *mpol, pgoff_t ilx);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
-			      struct vm_fault *vmf, bool *swapcached);
+			      struct vm_fault *vmf, enum swap_cache_result *result);
 
 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
@@ -92,7 +98,7 @@ static inline struct page *swap_cluster_readahead(swp_entry_t entry,
 }
 
 static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
-			struct vm_fault *vmf, bool *swapcached)
+			struct vm_fault *vmf, enum swap_cache_result *result)
 {
 	return NULL;
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index d87c20f9f7ec..e96d63bf8a22 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -908,8 +908,7 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
  * @entry: swap entry of this memory
  * @gfp_mask: memory allocation flags
  * @vmf: fault information
- * @swapcached: pointer to a bool used as indicator if the
- *              page is swapped in through swapcache.
+ * @result: a return value to indicate swap cache usage.
  *
  * Returns the struct page for entry and addr, after queueing swapin.
  *
@@ -918,30 +917,39 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
  * or vma-based(ie, virtual address based on faulty address) readahead.
  */
 struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
-			      struct vm_fault *vmf, bool *swapcached)
+			      struct vm_fault *vmf, enum swap_cache_result *result)
 {
+	enum swap_cache_result cache_result;
 	struct swap_info_struct *si;
 	struct mempolicy *mpol;
+	struct folio *folio;
 	struct page *page;
 	pgoff_t ilx;
-	bool cached;
+
+	folio = swap_cache_get_folio(entry, vmf->vma, vmf->address);
+	if (folio) {
+		page = folio_file_page(folio, swp_offset(entry));
+		cache_result = SWAP_CACHE_HIT;
+		goto done;
+	}
 
 	si = swp_swap_info(entry);
 	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
 	if (swap_use_no_readahead(si, swp_offset(entry))) {
 		page = swapin_no_readahead(entry, gfp_mask, mpol, ilx,
					   vmf->vma->vm_mm);
-		cached = false;
+		cache_result = SWAP_CACHE_BYPASS;
 	} else if (swap_use_vma_readahead(si)) {
 		page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
-		cached = true;
+		cache_result = SWAP_CACHE_MISS;
 	} else {
 		page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
-		cached = true;
+		cache_result = SWAP_CACHE_MISS;
 	}
 	mpol_cond_put(mpol);
-	if (swapcached)
-		*swapcached = cached;
+done:
+	if (result)
+		*result = cache_result;
 	return page;
 }
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 01c3f53b6521..b6d57fff5e21 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1822,13 +1822,21 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 
 	si = swap_info[type];
 	do {
-		struct folio *folio;
+		struct page *page;
 		unsigned long offset;
 		unsigned char swp_count;
+		struct folio *folio = NULL;
 		swp_entry_t entry;
 		int ret;
 		pte_t ptent;
 
+		struct vm_fault vmf = {
+			.vma = vma,
+			.address = addr,
+			.real_address = addr,
+			.pmd = pmd,
+		};
+
 		if (!pte++) {
 			pte = pte_offset_map(pmd, addr);
 			if (!pte)
@@ -1847,22 +1855,10 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		offset = swp_offset(entry);
 		pte_unmap(pte);
 		pte = NULL;
-
-		folio = swap_cache_get_folio(entry, vma, addr);
-		if (!folio) {
-			struct page *page;
-			struct vm_fault vmf = {
-				.vma = vma,
-				.address = addr,
-				.real_address = addr,
-				.pmd = pmd,
-			};
-
-			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-						&vmf, NULL);
-			if (page)
-				folio = page_folio(page);
-		}
+		page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
+					&vmf, NULL);
+		if (page)
+			folio = page_folio(page);
 		if (!folio) {
 			/*
 			 * The entry could have been freed, and will not