From patchwork Sun Nov 19 19:47:34 2023
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13460674
From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, "Huang, Ying", David Hildenbrand, Hugh Dickins,
	Johannes Weiner, Matthew Wilcox, Michal Hocko,
	linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH 18/24] mm/swap: introduce a helper non fault swapin
Date: Mon, 20 Nov 2023 03:47:34 +0800
Message-ID: <20231119194740.94101-19-ryncsn@gmail.com>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>
Reply-To: Kairui Song
From: Kairui Song

There are two places where swapin is not directly caused by a page fault:
shmem swapin is invoked through the shmem mapping, and swapoff triggers
swapin by walking
the page table. Both used to construct a pseudo vm_fault struct for the
swapin function. Shmem recently dropped its pseudo vm_fault in commit
ddc1a5cbc05d ("mempolicy: alloc_pages_mpol() for NUMA policy without vma");
the swapoff path is still using one.

Introduce a helper that covers both callers. This saves stack usage on the
swapoff path and applies a unified swap cache and readahead policy check.
It also prepares for follow-up commits.

Signed-off-by: Kairui Song
---
 mm/shmem.c      | 51 ++++++++++++++++---------------------------------
 mm/swap.h       | 11 +++++++++++
 mm/swap_state.c | 38 ++++++++++++++++++++++++++++++++++++
 mm/swapfile.c   | 23 +++++++++++-----------
 4 files changed, 76 insertions(+), 47 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index f9ce4067c742..81d129aa66d1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1565,22 +1565,6 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
 static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
 			pgoff_t index, unsigned int order, pgoff_t *ilx);
 
-static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
-		struct shmem_inode_info *info, pgoff_t index)
-{
-	struct mempolicy *mpol;
-	pgoff_t ilx;
-	struct page *page;
-
-	mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
-	page = swap_cluster_readahead(swap, gfp, mpol, ilx);
-	mpol_cond_put(mpol);
-
-	if (!page)
-		return NULL;
-	return page_folio(page);
-}
-
 /*
  * Make sure huge_gfp is always more limited than limit_gfp.
  * Some of the flags set permissions, while others set limitations.
@@ -1854,9 +1838,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct swap_info_struct *si;
+	enum swap_cache_result result;
 	struct folio *folio = NULL;
+	struct mempolicy *mpol;
+	struct page *page;
 	swp_entry_t swap;
+	pgoff_t ilx;
 	int error;
 
 	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
@@ -1866,34 +1853,30 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	if (is_poisoned_swp_entry(swap))
 		return -EIO;
 
-	si = get_swap_device(swap);
-	if (!si) {
+	mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
+	page = swapin_page_non_fault(swap, gfp, mpol, ilx, fault_mm, &result);
+	mpol_cond_put(mpol);
+
+	if (PTR_ERR(page) == -EBUSY) {
 		if (!shmem_confirm_swap(mapping, index, swap))
 			return -EEXIST;
 		else
 			return -EINVAL;
-	}
-
-	/* Look it up and read it in.. */
-	folio = swap_cache_get_folio(swap, NULL, NULL);
-	if (!folio) {
-		/* Or update major stats only when swapin succeeds?? */
-		if (fault_type) {
+	} else if (!page) {
+		error = -ENOMEM;
+		goto failed;
+	} else {
+		folio = page_folio(page);
+		if (fault_type && result != SWAP_CACHE_HIT) {
 			*fault_type |= VM_FAULT_MAJOR;
 			count_vm_event(PGMAJFAULT);
 			count_memcg_event_mm(fault_mm, PGMAJFAULT);
 		}
-		/* Here we actually start the io */
-		folio = shmem_swapin_cluster(swap, gfp, info, index);
-		if (!folio) {
-			error = -ENOMEM;
-			goto failed;
-		}
 	}
 
 	/* We have to do this with folio locked to prevent races */
 	folio_lock(folio);
-	if (!folio_test_swapcache(folio) ||
+	if ((result != SWAP_CACHE_BYPASS && !folio_test_swapcache(folio)) ||
 	    folio->swap.val != swap.val ||
 	    !shmem_confirm_swap(mapping, index, swap)) {
 		error = -EEXIST;
@@ -1930,7 +1913,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	delete_from_swap_cache(folio);
 	folio_mark_dirty(folio);
 	swap_free(swap);
-	put_swap_device(si);
 
 	*foliop = folio;
 	return 0;
@@ -1944,7 +1926,6 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		folio_unlock(folio);
 		folio_put(folio);
 	}
-	put_swap_device(si);
 
 	return error;
 }
diff --git a/mm/swap.h b/mm/swap.h
index da9deb5ba37d..b073c29c9790 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -62,6 +62,10 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 		struct mempolicy *mpol, pgoff_t ilx);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
 		struct vm_fault *vmf, enum swap_cache_result *result);
+struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
+		struct mempolicy *mpol, pgoff_t ilx,
+		struct mm_struct *mm,
+		enum swap_cache_result *result);
 
 static inline unsigned int folio_swap_flags(struct folio *folio)
 {
@@ -103,6 +107,13 @@ static inline struct page *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
 	return NULL;
 }
 
+static inline struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
+		struct mempolicy *mpol, pgoff_t ilx, struct mm_struct *mm,
+		enum swap_cache_result *result)
+{
+	return NULL;
+}
+
 static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 {
 	return 0;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index ff8a166603d0..eef66757c615 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -956,6 +956,44 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	return page;
 }
 
+struct page *swapin_page_non_fault(swp_entry_t entry, gfp_t gfp_mask,
+		struct mempolicy *mpol, pgoff_t ilx,
+		struct mm_struct *mm, enum swap_cache_result *result)
+{
+	enum swap_cache_result cache_result;
+	struct swap_info_struct *si;
+	void *shadow = NULL;
+	struct folio *folio;
+	struct page *page;
+
+	/* Prevent swapoff from happening to us */
+	si = get_swap_device(entry);
+	if (unlikely(!si))
+		return ERR_PTR(-EBUSY);
+
+	folio = swap_cache_get_folio(entry, NULL, &shadow);
+	if (folio) {
+		page = folio_file_page(folio, swp_offset(entry));
+		cache_result = SWAP_CACHE_HIT;
+		goto done;
+	}
+
+	if (swap_use_no_readahead(si, swp_offset(entry))) {
+		page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, mm);
+		if (shadow)
+			workingset_refault(page_folio(page), shadow);
+		cache_result = SWAP_CACHE_BYPASS;
+	} else {
+		page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+		cache_result = SWAP_CACHE_MISS;
+	}
+done:
+	put_swap_device(si);
+	if (result)
+		*result = cache_result;
+	return page;
+}
+
 #ifdef CONFIG_SYSFS
 static ssize_t vma_ra_enabled_show(struct kobject *kobj,
 				   struct kobj_attribute *attr, char *buf)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 925ad92486a4..f8c5096fe0f0 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1822,20 +1822,15 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 
 	si = swap_info[type];
 	do {
+		int ret;
+		pte_t ptent;
+		pgoff_t ilx;
+		swp_entry_t entry;
 		struct page *page;
 		unsigned long offset;
+		struct mempolicy *mpol;
 		unsigned char swp_count;
 		struct folio *folio = NULL;
-		swp_entry_t entry;
-		int ret;
-		pte_t ptent;
-
-		struct vm_fault vmf = {
-			.vma = vma,
-			.address = addr,
-			.real_address = addr,
-			.pmd = pmd,
-		};
 
 		if (!pte++) {
 			pte = pte_offset_map(pmd, addr);
@@ -1855,8 +1850,12 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		offset = swp_offset(entry);
 		pte_unmap(pte);
 		pte = NULL;
-		page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
-					&vmf, NULL);
+
+		mpol = get_vma_policy(vma, addr, 0, &ilx);
+		page = swapin_page_non_fault(entry, GFP_HIGHUSER_MOVABLE,
+					     mpol, ilx, vma->vm_mm, NULL);
+		mpol_cond_put(mpol);
+
		if (IS_ERR(page))
			return PTR_ERR(page);
		else if (page)
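
For reviewers skimming the diff, the cache-state classification the new helper reports through its `result` argument reduces to three outcomes. The following is a stand-alone illustrative model, not kernel code: `classify_swapin()` and its two flag arguments are hypothetical stand-ins for the real `swap_cache_get_folio()` and `swap_use_no_readahead()` checks.

```c
#include <assert.h>

/* The three cache states the helper can report (names mirror the patch). */
enum swap_cache_result {
	SWAP_CACHE_HIT,		/* folio found in swap cache, no I/O issued */
	SWAP_CACHE_MISS,	/* readahead path: I/O goes through the swap cache */
	SWAP_CACHE_BYPASS,	/* direct swapin: folio never enters the swap cache */
};

/* Stand-in predicates: in_swap_cache models swap_cache_get_folio()
 * finding a folio; no_readahead models swap_use_no_readahead(). */
static enum swap_cache_result classify_swapin(int in_swap_cache, int no_readahead)
{
	if (in_swap_cache)
		return SWAP_CACHE_HIT;	/* checked first, so it wins */
	if (no_readahead)
		return SWAP_CACHE_BYPASS;
	return SWAP_CACHE_MISS;
}
```

shmem then keys off this state: a major fault is counted only when `result != SWAP_CACHE_HIT`, and the `folio_test_swapcache()` consistency check is skipped for `SWAP_CACHE_BYPASS`, since a bypassed folio never entered the swap cache.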