From patchwork Fri Mar 4 04:12:42 2022
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12768439
Date: Thu, 3 Mar 2022 20:12:42 -0800 (PST)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton
Wong" , Dave Chinner , Matthew Wilcox , linux-fsdevel@vger.kernel.org, linux-kernel@vger.kerne.org, linux-mm@kvack.org Subject: [PATCH mmotm] mm/fs: delete PF_SWAPWRITE Message-ID: <75e80e7-742d-e3bd-531-614db8961e4@google.com> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 8DC2810000E X-Stat-Signature: 8be69irk9kcj7wj9fnm1s1ad55fpfwsc Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=ffbMpA3u; spf=pass (imf05.hostedemail.com: domain of hughd@google.com designates 209.85.167.180 as permitted sender) smtp.mailfrom=hughd@google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1646367175-447850 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: PF_SWAPWRITE has been redundant since v3.2 commit ee72886d8ed5 ("mm: vmscan: do not writeback filesystem pages in direct reclaim"). Coincidentally, NeilBrown's current patch "remove inode_congested()" deletes may_write_to_inode(), which appeared to be the one function which took notice of PF_SWAPWRITE. But if you study the old logic, and the conditions under which may_write_to_inode() was called, you discover that flag and function have been pointless for a decade. Signed-off-by: Hugh Dickins --- fs/fs-writeback.c | 3 --- fs/xfs/libxfs/xfs_btree.c | 2 +- include/linux/sched.h | 1 - mm/migrate.c | 7 ------- mm/vmscan.c | 8 ++------ 5 files changed, 3 insertions(+), 18 deletions(-) --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -2196,7 +2196,6 @@ void wb_workfn(struct work_struct *work) long pages_written; set_worker_desc("flush-%s", bdi_dev_name(wb->bdi)); - current->flags |= PF_SWAPWRITE; if (likely(!current_is_workqueue_rescuer() || !test_bit(WB_registered, &wb->state))) { @@ -2225,8 +2224,6 @@ void wb_workfn(struct work_struct *work) wb_wakeup(wb); else if (wb_has_dirty_io(wb) && dirty_writeback_interval) wb_wakeup_delayed(wb); - - current->flags &= ~PF_SWAPWRITE; } /* --- a/fs/xfs/libxfs/xfs_btree.c +++ b/fs/xfs/libxfs/xfs_btree.c @@ -2818,7 +2818,7 @@ struct xfs_btree_split_args { * in any way. */ if (args->kswapd) - new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD; + new_pflags |= PF_MEMALLOC | PF_KSWAPD; current_set_flags_nested(&pflags, new_pflags); xfs_trans_set_context(args->cur->bc_tp); --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1700,7 +1700,6 @@ static inline int is_global_init(struct task_struct *tsk) * I am cleaning dirty pages from some other bdi. */ #define PF_KTHREAD 0x00200000 /* I am a kernel thread */ #define PF_RANDOMIZE 0x00400000 /* Randomize virtual address space */ -#define PF_SWAPWRITE 0x00800000 /* Allowed to write to swap */ #define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_mask */ #define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */ #define PF_MEMALLOC_PIN 0x10000000 /* Allocation context constrained to zones which allow long term pinning. 
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1350,7 +1350,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	bool is_thp = false;
 	struct page *page;
 	struct page *page2;
-	int swapwrite = current->flags & PF_SWAPWRITE;
 	int rc, nr_subpages;
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(thp_split_pages);
@@ -1359,9 +1358,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 
 	trace_mm_migrate_pages_start(mode, reason);
 
-	if (!swapwrite)
-		current->flags |= PF_SWAPWRITE;
-
 thp_subpage_migration:
 	for (pass = 0; pass < 10 && (retry || thp_retry); pass++) {
 		retry = 0;
@@ -1516,9 +1512,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	trace_mm_migrate_pages(nr_succeeded, nr_failed_pages, nr_thp_succeeded,
 			       nr_thp_failed, nr_thp_split, mode, reason);
 
-	if (!swapwrite)
-		current->flags &= ~PF_SWAPWRITE;
-
 	if (ret_succeeded)
 		*ret_succeeded = nr_succeeded;
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4468,7 +4468,7 @@ static int kswapd(void *p)
 	 * us from recursively trying to free more memory as we're
 	 * trying to free the first piece of memory in the first place).
 	 */
-	tsk->flags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD;
+	tsk->flags |= PF_MEMALLOC | PF_KSWAPD;
 	set_freezable();
 
 	WRITE_ONCE(pgdat->kswapd_order, 0);
@@ -4519,7 +4519,7 @@ static int kswapd(void *p)
 		goto kswapd_try_sleep;
 	}
 
-	tsk->flags &= ~(PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD);
+	tsk->flags &= ~(PF_MEMALLOC | PF_KSWAPD);
 
 	return 0;
 }
@@ -4760,11 +4760,8 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 	fs_reclaim_acquire(sc.gfp_mask);
 	/*
 	 * We need to be able to allocate from the reserves for RECLAIM_UNMAP
-	 * and we also need to be able to write out pages for RECLAIM_WRITE
-	 * and RECLAIM_UNMAP.
 	 */
 	noreclaim_flag = memalloc_noreclaim_save();
-	p->flags |= PF_SWAPWRITE;
 	set_task_reclaim_state(p, &sc.reclaim_state);
 
 	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) {
@@ -4778,7 +4775,6 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 	}
 
 	set_task_reclaim_state(p, NULL);
-	current->flags &= ~PF_SWAPWRITE;
 	memalloc_noreclaim_restore(noreclaim_flag);
 	fs_reclaim_release(sc.gfp_mask);
 	psi_memstall_leave(&pflags);
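
For reference, the "old logic" mentioned above looked roughly like this
in mm/vmscan.c before NeilBrown's series removed it (a simplified
sketch for illustration, not part of this patch; exact conditions
varied across releases):

	/* Writeback from reclaim was allowed if the caller carried
	 * PF_SWAPWRITE, or if the backing device was not congested,
	 * or if the caller was already writing to this bdi anyway.
	 */
	static int may_write_to_inode(struct inode *inode)
	{
		if (current->flags & PF_SWAPWRITE)	/* e.g. kswapd */
			return 1;
		if (!inode_write_congested(inode))
			return 1;
		if (inode_to_bdi(inode) == current->backing_dev_info)
			return 1;
		return 0;
	}

Since ee72886d8ed5, the pageout() path that consulted this function is
in practice reached for dirty file pages only from kswapd, and kswapd
already ran with PF_SWAPWRITE set: the flag merely short-circuited the
congestion checks for the one caller that was allowed through anyway,
which is why flag and function can both go.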