From patchwork Wed Sep 26 21:08:50 2018
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 10616847
From: Josef Bacik
To: kernel-team@fb.com, linux-kernel@vger.kernel.org, hannes@cmpxchg.org,
    tj@kernel.org, linux-fsdevel@vger.kernel.org, akpm@linux-foundation.org,
    riel@redhat.com, linux-mm@kvack.org, linux-btrfs@vger.kernel.org
Cc: Johannes Weiner
Subject: [PATCH 3/9] mm: clean up swapcache lookup and creation function names
Date: Wed, 26 Sep 2018 17:08:50 -0400
Message-Id: <20180926210856.7895-4-josef@toxicpanda.com>
In-Reply-To: <20180926210856.7895-1-josef@toxicpanda.com>
References: <20180926210856.7895-1-josef@toxicpanda.com>

From: Johannes Weiner

__read_swap_cache_async() has a misleading name. All it does is look up
or create a page in the swapcache; it doesn't initiate any IO.

The swapcache has many parallels to the page cache and shares naming
schemes with it elsewhere. Analogous to the page cache lookup and
creation API, rename __read_swap_cache_async() to
find_or_create_swap_cache() and lookup_swap_cache() to
find_swap_cache().
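To illustrate the calling convention the rename points at: the function returns the cached page if it already exists, otherwise installs a fresh one, and a boolean out-parameter tells the caller whether this call created the entry (only the creator then initiates the read). The following is a minimal userspace sketch of that find-or-create pattern — the struct, the `find_or_create()` helper, and the fixed-slot table are all hypothetical stand-ins, not the kernel's swapcache implementation.

```c
#include <stdbool.h>
#include <stddef.h>

#define CACHE_SLOTS 16

struct demo_page {
	unsigned long offset;
	bool valid;
};

static struct demo_page cache[CACHE_SLOTS];

/*
 * Hypothetical model of find_or_create_swap_cache(): look up the
 * entry for @offset, creating it on a miss.  *created reports
 * whether this call installed the entry, so exactly one caller
 * knows it is responsible for starting the IO.
 */
static struct demo_page *find_or_create(unsigned long offset, bool *created)
{
	struct demo_page *p = &cache[offset % CACHE_SLOTS];

	*created = false;
	if (p->valid && p->offset == offset)
		return p;	/* hit: someone else already created it */

	p->offset = offset;	/* miss: install a fresh entry */
	p->valid = true;
	*created = true;	/* caller should initiate the read */
	return p;
}
```

The same shape shows up in the patched read_swap_cache_async(): it calls the find-or-create helper and only issues swap_readpage() when `created` is true, so a concurrent caller that merely found the page never double-reads it.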
Signed-off-by: Johannes Weiner
Signed-off-by: Josef Bacik
---
 include/linux/swap.h | 14 ++++++++------
 mm/memory.c          |  2 +-
 mm/shmem.c           |  2 +-
 mm/swap_state.c      | 43 ++++++++++++++++++++++---------------------
 mm/zswap.c           |  8 ++++----
 5 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 8e2c11e692ba..293a84c34448 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -412,15 +412,17 @@ extern void __delete_from_swap_cache(struct page *);
 extern void delete_from_swap_cache(struct page *);
 extern void free_page_and_swap_cache(struct page *);
 extern void free_pages_and_swap_cache(struct page **, int);
-extern struct page *lookup_swap_cache(swp_entry_t entry,
-				      struct vm_area_struct *vma,
-				      unsigned long addr);
+extern struct page *find_swap_cache(swp_entry_t entry,
+				    struct vm_area_struct *vma,
+				    unsigned long addr);
+extern struct page *find_or_create_swap_cache(swp_entry_t entry,
+					      gfp_t gfp_mask,
+					      struct vm_area_struct *vma,
+					      unsigned long addr,
+					      bool *created);
 extern struct page *read_swap_cache_async(swp_entry_t, gfp_t,
 					  struct vm_area_struct *vma,
 					  unsigned long addr,
 					  bool do_poll);
-extern struct page *__read_swap_cache_async(swp_entry_t, gfp_t,
-			struct vm_area_struct *vma, unsigned long addr,
-			bool *new_page_allocated);
 extern struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 				struct vm_fault *vmf);
 extern struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
diff --git a/mm/memory.c b/mm/memory.c
index 433075f722ea..6f8abde84986 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2935,7 +2935,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)

 	delayacct_set_flag(DELAYACCT_PF_SWAPIN);
-	page = lookup_swap_cache(entry, vma, vmf->address);
+	page = find_swap_cache(entry, vma, vmf->address);
 	swapcache = page;

 	if (!page) {
diff --git a/mm/shmem.c b/mm/shmem.c
index 0376c124b043..9854903ae92f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1679,7 +1679,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	if (swap.val) {
 		/* Look it up and read it in.. */
-		page = lookup_swap_cache(swap, NULL, 0);
+		page = find_swap_cache(swap, NULL, 0);
 		if (!page) {
 			/* Or update major stats only when swapin succeeds?? */
 			if (fault_type) {
diff --git a/mm/swap_state.c b/mm/swap_state.c
index ecee9c6c4cc1..bae758e19f7a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -330,8 +330,8 @@ static inline bool swap_use_vma_readahead(void)
  * lock getting page table operations atomic even if we drop the page
  * lock before returning.
  */
-struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
-			       unsigned long addr)
+struct page *find_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
+			     unsigned long addr)
 {
 	struct page *page;

@@ -374,19 +374,20 @@ struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
 	return page;
 }

-struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
+struct page *find_or_create_swap_cache(swp_entry_t entry, gfp_t gfp_mask,
 			struct vm_area_struct *vma, unsigned long addr,
-			bool *new_page_allocated)
+			bool *created)
 {
 	struct page *found_page, *new_page = NULL;
 	struct address_space *swapper_space = swap_address_space(entry);
 	int err;
-	*new_page_allocated = false;
+
+	*created = false;

 	do {
 		/*
 		 * First check the swap cache.  Since this is normally
-		 * called after lookup_swap_cache() failed, re-calling
+		 * called after find_swap_cache() failed, re-calling
 		 * that would confuse statistics.
 		 */
 		found_page = find_get_page(swapper_space, swp_offset(entry));
@@ -449,7 +450,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			 * Initiate read into locked page and return.
 			 */
 			lru_cache_add_anon(new_page);
-			*new_page_allocated = true;
+			*created = true;
 			return new_page;
 		}
 		radix_tree_preload_end();
@@ -475,14 +476,14 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			struct vm_area_struct *vma, unsigned long addr, bool do_poll)
 {
-	bool page_was_allocated;
-	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
-			vma, addr, &page_was_allocated);
+	struct page *page;
+	bool created;

-	if (page_was_allocated)
-		swap_readpage(retpage, do_poll);
+	page = find_or_create_swap_cache(entry, gfp_mask, vma, addr, &created);
+	if (created)
+		swap_readpage(page, do_poll);

-	return retpage;
+	return page;
 }

 static unsigned int __swapin_nr_pages(unsigned long prev_offset,
@@ -573,7 +574,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	unsigned long mask;
 	struct swap_info_struct *si = swp_swap_info(entry);
 	struct blk_plug plug;
-	bool do_poll = true, page_allocated;
+	bool do_poll = true, created;
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;

@@ -593,12 +594,12 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	blk_start_plug(&plug);
 	for (offset = start_offset; offset <= end_offset ; offset++) {
 		/* Ok, do the async read-ahead now */
-		page = __read_swap_cache_async(
-			swp_entry(swp_type(entry), offset),
-			gfp_mask, vma, addr, &page_allocated);
+		page = find_or_create_swap_cache(
+			swp_entry(swp_type(entry), offset),
+			gfp_mask, vma, addr, &created);
 		if (!page)
 			continue;
-		if (page_allocated) {
+		if (created) {
 			swap_readpage(page, false);
 			if (offset != entry_offset) {
 				SetPageReadahead(page);
@@ -738,7 +739,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 	pte_t *pte, pentry;
 	swp_entry_t entry;
 	unsigned int i;
-	bool page_allocated;
+	bool created;
 	struct vma_swap_readahead ra_info = {0,};

 	swap_ra_info(vmf, &ra_info);
@@ -756,11 +757,11 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 		entry = pte_to_swp_entry(pentry);
 		if (unlikely(non_swap_entry(entry)))
 			continue;
-		page = __read_swap_cache_async(entry, gfp_mask, vma,
-					       vmf->address, &page_allocated);
+		page = find_or_create_swap_cache(entry, gfp_mask, vma,
+						 vmf->address, &created);
 		if (!page)
 			continue;
-		if (page_allocated) {
+		if (created) {
 			swap_readpage(page, false);
 			if (i != ra_info.offset) {
 				SetPageReadahead(page);
diff --git a/mm/zswap.c b/mm/zswap.c
index cd91fd9d96b8..6f05faa75766 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -823,11 +823,11 @@ enum zswap_get_swap_ret {
 static int zswap_get_swap_cache_page(swp_entry_t entry,
 				struct page **retpage)
 {
-	bool page_was_allocated;
+	bool created;

-	*retpage = __read_swap_cache_async(entry, GFP_KERNEL,
-			NULL, 0, &page_was_allocated);
-	if (page_was_allocated)
+	*retpage = find_or_create_swap_cache(entry, GFP_KERNEL,
+			NULL, 0, &created);
+	if (created)
 		return ZSWAP_SWAPCACHE_NEW;
 	if (!*retpage)
 		return ZSWAP_SWAPCACHE_FAIL;