From patchwork Mon Jul 19 03:18:19 2021
From: Matthew Wilcox <willy@infradead.org>
Date: Mon, 19 Jul 2021 04:18:19 +0100
To: Stephen Rothwell
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    Andrew Morton, Linus Torvalds
Subject: Folio tree for next
Hi Stephen,

Please include a new tree in linux-next:

https://git.infradead.org/users/willy/pagecache.git/shortlog/refs/heads/for-next
aka git://git.infradead.org/users/willy/pagecache.git for-next

There are some minor conflicts with mmotm.  I resolved some of them by
pulling in three patches from mmotm and rebasing on top of them.  These
conflicts (or near-misses) still remain, and I'm showing my resolution:

+++ b/arch/arm/include/asm/cacheflush.h
@@@ -290,8 -290,8 +290,9 @@@ extern void flush_cache_page(struct vm_
   */
  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
  extern void flush_dcache_page(struct page *);
+ void flush_dcache_folio(struct folio *folio);
  
 +#define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1
  static inline void flush_kernel_vmap_range(void *addr, int size)
  {
  	if ((cache_is_vivt() || cache_is_vipt_aliasing()))

+++ b/mm/filemap.c
@@@ -836,9 -833,9 +838,9 @@@ void replace_page_cache_page(struct pag
  	new->mapping = mapping;
  	new->index = offset;
  
- 	mem_cgroup_migrate(old, new);
+ 	mem_cgroup_migrate(fold, fnew);
  
 -	xas_lock_irqsave(&xas, flags);
 +	xas_lock_irq(&xas);
  	xas_store(&xas, new);
  	old->mapping = NULL;

+++ b/mm/page-writeback.c
@@@ -2739,34 -2751,17 +2763,35 @@@ bool folio_clear_dirty_for_io(struct fo
  		unlocked_inode_to_wb_end(inode, &cookie);
  		return ret;
  	}
- 	return TestClearPageDirty(page);
+ 	return folio_test_clear_dirty(folio);
  }
- EXPORT_SYMBOL(clear_page_dirty_for_io);
+ EXPORT_SYMBOL(folio_clear_dirty_for_io);
  
 +static void wb_inode_writeback_start(struct bdi_writeback *wb)
 +{
 +	atomic_inc(&wb->writeback_inodes);
 +}
 +
 +static void wb_inode_writeback_end(struct bdi_writeback *wb)
 +{
 +	atomic_dec(&wb->writeback_inodes);
 +	/*
 +	 * Make sure estimate of writeback throughput gets updated after
 +	 * writeback completed. We delay the update by BANDWIDTH_INTERVAL
 +	 * (which is the interval other bandwidth updates use for batching) so
 +	 * that if multiple inodes end writeback at a similar time, they get
 +	 * batched into one bandwidth update.
 +	 */
 +	queue_delayed_work(bdi_wq, &wb->bw_dwork, BANDWIDTH_INTERVAL);
 +}
 +
- int test_clear_page_writeback(struct page *page)
+ bool __folio_end_writeback(struct folio *folio)
  {
- 	struct address_space *mapping = page_mapping(page);
- 	int ret;
+ 	long nr = folio_nr_pages(folio);
+ 	struct address_space *mapping = folio_mapping(folio);
+ 	bool ret;
  
- 	lock_page_memcg(page);
+ 	folio_memcg_lock(folio);
  	if (mapping && mapping_use_writeback_tags(mapping)) {
  		struct inode *inode = mapping->host;
  		struct backing_dev_info *bdi = inode_to_bdi(inode);
@@@ -2780,11 -2775,8 +2805,11 @@@
  		if (bdi->capabilities & BDI_CAP_WRITEBACK_ACCT) {
  			struct bdi_writeback *wb = inode_to_wb(inode);
  
 -			dec_wb_stat(wb, WB_WRITEBACK);
 -			__wb_writeout_inc(wb);
 +			wb_stat_mod(wb, WB_WRITEBACK, -nr);
 +			__wb_writeout_add(wb, nr);
 +			if (!mapping_tagged(mapping,
 +					    PAGECACHE_TAG_WRITEBACK))
 +				wb_inode_writeback_end(wb);
  		}
  	}

@@@ -2827,18 -2821,14 +2854,18 @@@ bool __folio_start_writeback(struct fol
  					PAGECACHE_TAG_WRITEBACK);
  			xas_set_mark(&xas, PAGECACHE_TAG_WRITEBACK);
 -			if (bdi->capabilities & BDI_CAP_WRITEBACK_ACCT)
 -				wb_stat_mod(inode_to_wb(inode), WB_WRITEBACK,
 -						nr);
 +			if (bdi->capabilities & BDI_CAP_WRITEBACK_ACCT) {
 +				struct bdi_writeback *wb = inode_to_wb(inode);
 +
- 				inc_wb_stat(wb, WB_WRITEBACK);
++				wb_stat_mod(wb, WB_WRITEBACK, nr);
 +				if (!on_wblist)
 +					wb_inode_writeback_start(wb);
 +			}
  			/*
 -			 * We can come through here when swapping anonymous
 -			 * pages, so we don't necessarily have an inode to track
 -			 * for sync.
 +			 * We can come through here when swapping
 +			 * anonymous folios, so we don't necessarily
 +			 * have an inode to track for sync.
  			 */
  			if (mapping->host && !on_wblist)
  				sb_mark_inode_writeback(mapping->host);

diff --cc mm/page-writeback.c
index 57b98ea365e2,c2987f05c944..96b69365de65
--- a/mm/page-writeback.c
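
One note on the renames in the hunks above, since they can look like
API breaks: the tree keeps the old page-based entry points as one-line
compatibility wrappers around the folio versions, so not-yet-converted
callers keep building.  A minimal sketch of that pattern in
kernel-context C follows; the wrapper name matches the hunk above, but
the exact signature and placement are assumed here rather than copied
from the branch:

	/*
	 * Sketch (assumed, not taken from the for-next branch) of the
	 * folio-compat shim pattern: the legacy page API delegates to
	 * the folio implementation by resolving the page to its
	 * containing folio, so struct-page callers keep working.
	 */
	#include <linux/mm.h>
	#include <linux/pagemap.h>
	#include <linux/writeback.h>

	bool clear_page_dirty_for_io(struct page *page)
	{
		/* page_folio() resolves the compound head, so the
		 * shim adds no locking or refcounting of its own. */
		return folio_clear_dirty_for_io(page_folio(page));
	}
	EXPORT_SYMBOL(clear_page_dirty_for_io);

This is presumably also why the EXPORT_SYMBOL line can simply move
from clear_page_dirty_for_io() to folio_clear_dirty_for_io() in the
resolution above: a matching export would sit next to the shim.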