From patchwork Mon Jun 11 14:06:16 2018
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 10457977
From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
	Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin,
	Ryusuke Konishi, linux-nilfs@vger.kernel.org, Jaegeuk Kim,
	Chao Yu, linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v13 49/72] shmem: Convert shmem_add_to_page_cache to XArray
Date: Mon, 11 Jun 2018 07:06:16 -0700
Message-Id: <20180611140639.17215-50-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180611140639.17215-1-willy@infradead.org>
References: <20180611140639.17215-1-willy@infradead.org>

From: Matthew Wilcox <willy@infradead.org>

This removes the last caller of radix_tree_maybe_preload_order().
Simpler code, unless we run out of memory for new xa_nodes partway
through inserting entries into the xarray.  Hopefully we can support
multi-index entries in the page cache soon and all the awful code
goes away.

Signed-off-by: Matthew Wilcox <willy@infradead.org>
---
 mm/shmem.c | 87 ++++++++++++++++++++++++------------------------------
 1 file changed, 39 insertions(+), 48 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 983a27656e2e..8e702b6d84a5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -576,9 +576,10 @@ static inline bool is_huge_enabled(struct shmem_sb_info *sbinfo)
  */
 static int shmem_add_to_page_cache(struct page *page,
 				   struct address_space *mapping,
-				   pgoff_t index, void *expected)
+				   pgoff_t index, void *expected, gfp_t gfp)
 {
-	int error, nr = hpage_nr_pages(page);
+	XA_STATE(xas, &mapping->i_pages, index);
+	unsigned long i, nr = 1UL << compound_order(page);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 	VM_BUG_ON_PAGE(index != round_down(index, nr), page);
@@ -587,49 +588,47 @@ static int shmem_add_to_page_cache(struct page *page,
 	VM_BUG_ON(expected && PageTransHuge(page));
 
 	page_ref_add(page, nr);
-	page->mapping = mapping;
 	page->index = index;
+	page->mapping = mapping;
 
-	xa_lock_irq(&mapping->i_pages);
-	if (PageTransHuge(page)) {
-		void __rcu **results;
-		pgoff_t idx;
-		int i;
-
-		error = 0;
-		if (radix_tree_gang_lookup_slot(&mapping->i_pages,
-					&results, &idx, index, 1) &&
-				idx < index + HPAGE_PMD_NR) {
-			error = -EEXIST;
+	do {
+		xas_lock_irq(&xas);
+		xas_create_range(&xas, index + nr - 1);
+		if (xas_error(&xas))
+			goto unlock;
+		for (i = 0; i < nr; i++) {
+			void *entry = xas_load(&xas);
+			if (entry != expected)
+				xas_set_err(&xas, -ENOENT);
+			if (xas_error(&xas))
+				goto undo;
+			xas_store(&xas, page + i);
+			xas_next(&xas);
 		}
-
-		if (!error) {
-			for (i = 0; i < HPAGE_PMD_NR; i++) {
-				error = radix_tree_insert(&mapping->i_pages,
-						index + i, page + i);
-				VM_BUG_ON(error);
-			}
+		if (PageTransHuge(page)) {
 			count_vm_event(THP_FILE_ALLOC);
+			__inc_node_page_state(page, NR_SHMEM_THPS);
 		}
-	} else if (!expected) {
-		error = radix_tree_insert(&mapping->i_pages, index, page);
-	} else {
-		error = shmem_replace_entry(mapping, index, expected, page);
-	}
-
-	if (!error) {
 		mapping->nrpages += nr;
-		if (PageTransHuge(page))
-			__inc_node_page_state(page, NR_SHMEM_THPS);
 		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
 		__mod_node_page_state(page_pgdat(page), NR_SHMEM, nr);
-		xa_unlock_irq(&mapping->i_pages);
-	} else {
+		goto unlock;
+undo:
+		while (i-- > 0) {
+			xas_store(&xas, NULL);
+			xas_prev(&xas);
+		}
+unlock:
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, gfp));
+
+	if (xas_error(&xas)) {
 		page->mapping = NULL;
-		xa_unlock_irq(&mapping->i_pages);
 		page_ref_sub(page, nr);
+		return xas_error(&xas);
 	}
-	return error;
+
+	return 0;
 }
 
 /*
@@ -1182,7 +1181,7 @@ static int shmem_unuse_inode(struct shmem_inode_info *info,
 	 */
 	if (!error)
 		error = shmem_add_to_page_cache(*pagep, mapping, index,
-						radswap);
+						radswap, gfp);
 	if (error != -ENOMEM) {
 		/*
 		 * Truncation and eviction use free_swap_and_cache(), which
@@ -1698,7 +1697,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 						false);
 		if (!error) {
 			error = shmem_add_to_page_cache(page, mapping, index,
-						swp_to_radix_entry(swap));
+						swp_to_radix_entry(swap), gfp);
 			/*
 			 * We already confirmed swap under page lock, and make
 			 * no memory allocation here, so usually no possibility
@@ -1804,13 +1803,8 @@ alloc_nohuge:		page = shmem_alloc_and_acct_page(gfp, inode,
 				PageTransHuge(page));
 		if (error)
 			goto unacct;
-		error = radix_tree_maybe_preload_order(gfp & GFP_RECLAIM_MASK,
-				compound_order(page));
-		if (!error) {
-			error = shmem_add_to_page_cache(page, mapping, hindex,
-							NULL);
-			radix_tree_preload_end();
-		}
+		error = shmem_add_to_page_cache(page, mapping, hindex,
+						NULL, gfp & GFP_RECLAIM_MASK);
 		if (error) {
 			mem_cgroup_cancel_charge(page, memcg,
 						 PageTransHuge(page));
@@ -2277,11 +2271,8 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 	if (ret)
 		goto out_release;
 
-	ret = radix_tree_maybe_preload(gfp & GFP_RECLAIM_MASK);
-	if (!ret) {
-		ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL);
-		radix_tree_preload_end();
-	}
+	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
+				      gfp & GFP_RECLAIM_MASK);
 	if (ret)
 		goto out_release_uncharge;
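
As an aside for readers new to the XArray API: the allocate-and-retry
pattern the conversion relies on (in place of the old radix tree
preload calls) reduces to the minimal sketch below.  This is not part
of the patch, and my_mapping, my_index and my_page are placeholder
names.

	/* Sketch of the xas_nomem() retry idiom; placeholder names. */
	XA_STATE(xas, &my_mapping->i_pages, my_index);

	do {
		xas_lock_irq(&xas);
		xas_store(&xas, my_page);  /* may record -ENOMEM in xas */
		xas_unlock_irq(&xas);
		/*
		 * xas_nomem() returns true only if the failed operation
		 * needed memory and an xa_node could be allocated (with
		 * the lock dropped); the loop then retries the store
		 * using the preallocated node.
		 */
	} while (xas_nomem(&xas, GFP_KERNEL));

	return xas_error(&xas);

Dropping out of the lock to allocate is what makes the undo/unlock
labels in shmem_add_to_page_cache() necessary: a multi-page insert can
fail partway through and must roll back the entries already stored
before retrying.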