From patchwork Sat Jul 17 06:59:08 2021
X-Patchwork-Submitter: Miaohe Lin
X-Patchwork-Id: 12383219
From: Miaohe Lin
Subject: [PATCH v2 1/4] mm/vmscan: remove the PageDirty check after MADV_FREE pages are page_ref_freezed
Date: Sat, 17 Jul 2021 14:59:08 +0800
Message-ID: <20210717065911.61497-2-linmiaohe@huawei.com>
In-Reply-To: <20210717065911.61497-1-linmiaohe@huawei.com>
References: <20210717065911.61497-1-linmiaohe@huawei.com>
If MADV_FREE pages are redirtied before they can be reclaimed, they are
put back on the anonymous LRU list by setting the SwapBacked flag, and
will then be reclaimed via the normal swapout path. But as Yu Zhao
pointed out, "The page has only one reference left, which is from the
isolation. After the caller puts the page back on lru and drops the
reference, the page will be freed anyway. It doesn't matter which lru
it goes." So we don't bother checking PageDirty here.

[Yu Zhao's comment is also quoted in the code.]

Signed-off-by: Miaohe Lin
Reviewed-by: Yu Zhao
---
 mm/vmscan.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a7602f71ec04..92a515e82b1b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1627,11 +1627,14 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			/* follow __remove_mapping for reference */
 			if (!page_ref_freeze(page, 1))
 				goto keep_locked;
-			if (PageDirty(page)) {
-				page_ref_unfreeze(page, 1);
-				goto keep_locked;
-			}
-
+			/*
+			 * The page has only one reference left, which is
+			 * from the isolation. After the caller puts the
+			 * page back on lru and drops the reference, the
+			 * page will be freed anyway. It doesn't matter
+			 * which lru it goes. So we don't bother checking
+			 * PageDirty here.
+			 */
 			count_vm_event(PGLAZYFREED);
 			count_memcg_page_event(page, PGLAZYFREED);
 		} else if (!mapping || !__remove_mapping(mapping, page, true,
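
As a side note for readers less familiar with MADV_FREE: the snippet
below is a minimal userspace sketch (not part of this patch) of the
behaviour the commit message relies on. Pages hinted with MADV_FREE may
be reclaimed lazily, but writing to them again ("redirtying") before
reclaim cancels the hint and the new contents are kept.

/*
 * Minimal userspace sketch of MADV_FREE semantics (not part of this
 * patch). Assumes Linux >= 4.5, where MADV_FREE was added for private
 * anonymous mappings.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 'x', len);

	/*
	 * Hint that these pages may be freed lazily. Until the kernel
	 * actually reclaims them, reads still return the old contents.
	 */
	if (madvise(buf, len, MADV_FREE))
		return 1;

	/*
	 * Redirtying a page before it is reclaimed cancels the hint: the
	 * kernel sets the SwapBacked flag again and the data survives.
	 */
	buf[0] = 'y';
	printf("first byte after redirty: %c\n", buf[0]);

	munmap(buf, len);
	return 0;
}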
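
The refcount argument may also be easier to see in isolation. What
follows is an illustrative userspace analogue (hypothetical helpers,
not the kernel implementation) of the page_ref_freeze() /
page_ref_unfreeze() pattern: a compare-and-swap that succeeds only when
the caller holds the sole remaining reference, leaving the count pinned
at zero so that no new reference can be taken concurrently.

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Illustrative analogue of page_ref_freeze(): atomically replace the
 * count with 0, but only if it currently equals 'expected'. Success
 * means the caller held the sole reference and the object is now
 * unreachable to new users.
 */
static bool ref_freeze(atomic_int *count, int expected)
{
	int old = expected;

	return atomic_compare_exchange_strong(count, &old, 0);
}

/* Illustrative analogue of page_ref_unfreeze(): make it reachable again. */
static void ref_unfreeze(atomic_int *count, int value)
{
	atomic_store(count, value);
}

In shrink_page_list() the expected count is 1, the reference taken at
isolation. Once the freeze succeeds, nobody else can reach the page, so
whether it was redirtied no longer matters: the caller will drop the
last reference and the page is freed either way. That is why the
PageDirty() check (and its page_ref_unfreeze() rollback) can go.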