From patchwork Sat Mar 13 07:57:34 2021
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12136569
Date: Sat, 13 Mar 2021 00:57:34 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-2-yuzhao@google.com>
Subject: [PATCH v1 01/14] include/linux/memcontrol.h: do not warn in page_memcg_rcu() if !CONFIG_MEMCG
From: Yu Zhao
To: linux-mm@kvack.org
Cc: Alex Shi, Andrew Morton, Dave Hansen, Hillf Danton, Johannes Weiner,
    Joonsoo Kim, Matthew Wilcox, Mel Gorman, Michal Hocko, Roman Gushchin,
    Vlastimil Babka, Wei Yang, Yang Shi, Ying Huang,
    linux-kernel@vger.kernel.org, page-reclaim@google.com, Yu Zhao

We want to make sure the RCU read lock is held while using
page_memcg_rcu(). But having a WARN_ON_ONCE() in page_memcg_rcu() when
!CONFIG_MEMCG is superfluous because of the following legit use case:

  memcg = lock_page_memcg(page1)
    (rcu_read_lock() if CONFIG_MEMCG=y)

  do something to page1

  if (page_memcg_rcu(page2) == memcg)
    do something to page2 too as it cannot be migrated away from the
    memcg either.

  unlock_page_memcg(page1)
    (rcu_read_unlock() if CONFIG_MEMCG=y)

This patch removes the WARN_ON_ONCE() from page_memcg_rcu() for the
!CONFIG_MEMCG case.
Signed-off-by: Yu Zhao
---
 include/linux/memcontrol.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e6dc793d587d..f325aeb4b4e8 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1079,7 +1079,6 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 
 static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
 {
-        WARN_ON_ONCE(!rcu_read_lock_held());
         return NULL;
 }
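For illustration only (not part of this patch): the use case from the
commit message written out as a minimal C sketch. It assumes the
contemporaneous lock_page_memcg(), which returns the page's memcg;
page1, page2 and the surrounding function are hypothetical.

static void touch_two_pages(struct page *page1, struct page *page2)
{
        struct mem_cgroup *memcg;

        /* takes rcu_read_lock() when CONFIG_MEMCG=y; a stub otherwise */
        memcg = lock_page_memcg(page1);

        /* ... do something to page1 ... */

        if (page_memcg_rcu(page2) == memcg) {
                /* page2 cannot be migrated away from this memcg either */
                /* ... do something to page2 too ... */
        }

        /* rcu_read_unlock() when CONFIG_MEMCG=y */
        unlock_page_memcg(page1);
}

With CONFIG_MEMCG=n both helpers are stubs returning NULL, so the
comparison still compiles and, after this patch, page_memcg_rcu() no
longer warns about the RCU read lock not being held.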
From patchwork Sat Mar 13 07:57:35 2021
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12136583
Date: Sat, 13 Mar 2021 00:57:35 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-3-yuzhao@google.com>
Subject: [PATCH v1 02/14] include/linux/nodemask.h: define next_memory_node() if !CONFIG_NUMA
From: Yu Zhao
To: linux-mm@kvack.org

Currently next_memory_node() only exists when CONFIG_NUMA=y. This patch
defines the macro for the !CONFIG_NUMA case, mirroring the existing
next_online_node() definition, so callers do not need to guard it with
#ifdef CONFIG_NUMA.
Signed-off-by: Yu Zhao
---
 include/linux/nodemask.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index ac398e143c9a..89fe4e3592f9 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -486,6 +486,7 @@ static inline int num_node_state(enum node_states state)
 #define first_online_node      0
 #define first_memory_node      0
 #define next_online_node(nid)  (MAX_NUMNODES)
+#define next_memory_node(nid)  (MAX_NUMNODES)
 #define nr_node_ids            1U
 #define nr_online_nodes        1U
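A hedged illustration (not part of the patch) of the kind of caller this
enables: code that round-robins over nodes with memory now compiles
unchanged with CONFIG_NUMA=n, where next_memory_node() expands to
MAX_NUMNODES exactly like next_online_node(). The helper name below is
made up.

#include <linux/nodemask.h>

/* Illustrative only: advance to the next node that has memory,
 * wrapping around. With CONFIG_NUMA=n this always yields node 0. */
static int next_memory_node_wrap(int nid)
{
        nid = next_memory_node(nid);
        if (nid >= MAX_NUMNODES)
                nid = first_memory_node;
        return nid;
}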
From patchwork Sat Mar 13 07:57:36 2021
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12136571
Date: Sat, 13 Mar 2021 00:57:36 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-4-yuzhao@google.com>
Subject: [PATCH v1 03/14] include/linux/huge_mm.h: define is_huge_zero_pmd() if !CONFIG_TRANSPARENT_HUGEPAGE
From: Yu Zhao
To: linux-mm@kvack.org

Currently is_huge_zero_pmd() only exists when CONFIG_TRANSPARENT_HUGEPAGE=y.
This patch defines the function for the !CONFIG_TRANSPARENT_HUGEPAGE case as
a stub that returns false, alongside the existing is_huge_zero_page() and
is_huge_zero_pud() stubs.
Signed-off-by: Yu Zhao
---
 include/linux/huge_mm.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index ba973efcd369..0ba7b3f9029c 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -443,6 +443,11 @@ static inline bool is_huge_zero_page(struct page *page)
         return false;
 }
 
+static inline bool is_huge_zero_pmd(pmd_t pmd)
+{
+        return false;
+}
+
 static inline bool is_huge_zero_pud(pud_t pud)
 {
         return false;
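An illustrative sketch (not from this series) of why the stub is useful:
a caller can test for the huge zero page unconditionally instead of
wrapping the call in #ifdef CONFIG_TRANSPARENT_HUGEPAGE. The function is
hypothetical.

#include <linux/pgtable.h>
#include <linux/huge_mm.h>

/* Hypothetical helper: with the new stub this compiles with THP
 * disabled, where the second branch is trivially dead code. */
static bool pmd_worth_scanning(pmd_t pmd)
{
        if (!pmd_present(pmd))
                return false;
        if (is_huge_zero_pmd(pmd))      /* huge zero page holds no user data */
                return false;
        return true;
}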
From patchwork Sat Mar 13 07:57:37 2021
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12136573
Date: Sat, 13 Mar 2021 00:57:37 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-5-yuzhao@google.com>
Subject: [PATCH v1 04/14] include/linux/cgroup.h: export cgroup_mutex
From: Yu Zhao
To: linux-mm@kvack.org

Export cgroup_mutex so it can be used to synchronize with memcg
allocations: move its extern declaration out of the CONFIG_PROVE_RCU
block and add cgroup_lock()/cgroup_unlock() wrappers, with empty stubs
for the !CONFIG_CGROUPS case.
Signed-off-by: Yu Zhao
---
 include/linux/cgroup.h | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 4f2f79de083e..bd5744360cfa 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -432,6 +432,18 @@ static inline void cgroup_put(struct cgroup *cgrp)
         css_put(&cgrp->self);
 }
 
+extern struct mutex cgroup_mutex;
+
+static inline void cgroup_lock(void)
+{
+        mutex_lock(&cgroup_mutex);
+}
+
+static inline void cgroup_unlock(void)
+{
+        mutex_unlock(&cgroup_mutex);
+}
+
 /**
  * task_css_set_check - obtain a task's css_set with extra access conditions
  * @task: the task to obtain css_set for
@@ -446,7 +458,6 @@ static inline void cgroup_put(struct cgroup *cgrp)
  * as locks used during the cgroup_subsys::attach() methods.
  */
 #ifdef CONFIG_PROVE_RCU
-extern struct mutex cgroup_mutex;
 extern spinlock_t css_set_lock;
 #define task_css_set_check(task, __c)          \
         rcu_dereference_check((task)->cgroups, \
@@ -704,6 +715,8 @@ struct cgroup;
 static inline u64 cgroup_id(const struct cgroup *cgrp) { return 1; }
 static inline void css_get(struct cgroup_subsys_state *css) {}
 static inline void css_put(struct cgroup_subsys_state *css) {}
+static inline void cgroup_lock(void) {}
+static inline void cgroup_unlock(void) {}
 static inline int cgroup_attach_task_all(struct task_struct *from,
                                          struct task_struct *t) { return 0; }
 static inline int cgroupstats_build(struct cgroupstats *stats,
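A hedged sketch (not part of the patch) of the stated purpose,
synchronizing with memcg allocations: memcg creation goes through
cgroup_mkdir(), which holds cgroup_mutex, so taking the mutex via the
new wrappers lets a walker visit the existing memcgs without racing
against creation. The function and its body are illustrative
assumptions.

#include <linux/cgroup.h>
#include <linux/memcontrol.h>

/* Illustrative only: visit every memcg while blocking concurrent
 * cgroup (and thus memcg) creation. */
static void visit_all_memcgs(void)
{
        struct mem_cgroup *memcg;

        cgroup_lock();          /* empty stub when CONFIG_CGROUPS=n */
        memcg = mem_cgroup_iter(NULL, NULL, NULL);
        do {
                /* ... record or update per-memcg state ... */
        } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
        cgroup_unlock();
}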
From patchwork Sat Mar 13 07:57:38 2021
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12136575
Date: Sat, 13 Mar 2021 00:57:38 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-6-yuzhao@google.com>
Subject: [PATCH v1 05/14] mm/swap.c: export activate_page()
From: Yu Zhao
To: linux-mm@kvack.org

Export activate_page(), which is a merger of the existing activate_page()
and __lru_cache_activate_page(), so it can be used to activate pages that
are either already on the LRU or still queued in lru_pvecs.lru_add.
Signed-off-by: Yu Zhao
---
 include/linux/swap.h |  1 +
 mm/swap.c            | 28 +++++++++++++++-------------
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4cc6ec3bf0ab..de2bbbf181ba 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -344,6 +344,7 @@ extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
 extern void rotate_reclaimable_page(struct page *page);
+extern void activate_page(struct page *page);
 extern void deactivate_file_page(struct page *page);
 extern void deactivate_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);
diff --git a/mm/swap.c b/mm/swap.c
index 31b844d4ed94..f20ed56ebbbf 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -334,7 +334,7 @@ static bool need_activate_page_drain(int cpu)
         return pagevec_count(&per_cpu(lru_pvecs.activate_page, cpu)) != 0;
 }
 
-static void activate_page(struct page *page)
+static void activate_page_on_lru(struct page *page)
 {
         page = compound_head(page);
         if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
@@ -354,7 +354,7 @@ static inline void activate_page_drain(int cpu)
 {
 }
 
-static void activate_page(struct page *page)
+static void activate_page_on_lru(struct page *page)
 {
         struct lruvec *lruvec;
 
@@ -368,11 +368,22 @@ static void activate_page(struct page *page)
 }
 #endif
 
-static void __lru_cache_activate_page(struct page *page)
+/*
+ * If the page is on the LRU, queue it for activation via
+ * lru_pvecs.activate_page. Otherwise, assume the page is on a
+ * pagevec, mark it active and it'll be moved to the active
+ * LRU on the next drain.
+ */
+void activate_page(struct page *page)
 {
         struct pagevec *pvec;
         int i;
 
+        if (PageLRU(page)) {
+                activate_page_on_lru(page);
+                return;
+        }
+
         local_lock(&lru_pvecs.lock);
         pvec = this_cpu_ptr(&lru_pvecs.lru_add);
@@ -421,16 +432,7 @@ void mark_page_accessed(struct page *page)
                  * evictable page accessed has no effect.
                  */
         } else if (!PageActive(page)) {
-                /*
-                 * If the page is on the LRU, queue it for activation via
-                 * lru_pvecs.activate_page. Otherwise, assume the page is on a
-                 * pagevec, mark it active and it'll be moved to the active
-                 * LRU on the next drain.
-                 */
-                if (PageLRU(page))
-                        activate_page(page);
-                else
-                        __lru_cache_activate_page(page);
+                activate_page(page);
                 ClearPageReferenced(page);
                 workingset_activation(page);
         }
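A hedged sketch (not from the patch) of what the export buys a caller
outside mm/swap.c: it can activate a page without knowing whether the
page has already been drained onto the LRU or is still sitting in a
per-CPU lru_add pagevec; the merged activate_page() handles both cases.
The function is hypothetical.

#include <linux/swap.h>
#include <linux/page-flags.h>

/* Illustrative only: promote a page that was just found referenced,
 * regardless of whether it has reached the LRU yet. */
static void promote_referenced_page(struct page *page)
{
        if (!PageUnevictable(page) && !PageActive(page))
                activate_page(page);
}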
From patchwork Sat Mar 13 07:57:39 2021
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12136577
Date: Sat, 13 Mar 2021 00:57:39 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-7-yuzhao@google.com>
Subject: [PATCH v1 06/14] mm, x86: support the access bit on non-leaf PMD entries
From: Yu Zhao
To: linux-mm@kvack.org

Some architectures support the accessed bit on non-leaf PMD entries
(parents) in addition to leaf PTE entries (children) where pages are
mapped; e.g., x86_64 sets the accessed bit on a parent when using it as
part of linear-address translation [1]. Page table walkers that are
interested in the accessed bit on children can take advantage of this:
when the accessed bit is not set on a parent, they do not need to search
its children, provided they previously cleared the accessed bit on the
parent in addition to its children.

[1]: Intel 64 and IA-32 Architectures Software Developer's Manual
     Volume 3 (October 2019), section 4.8
Signed-off-by: Yu Zhao
---
 arch/Kconfig                   | 8 ++++++++
 arch/x86/Kconfig               | 1 +
 arch/x86/include/asm/pgtable.h | 2 +-
 arch/x86/mm/pgtable.c          | 5 ++++-
 include/linux/pgtable.h        | 4 ++--
 5 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 2bb30673d8e6..137446d17732 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -783,6 +783,14 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE
 config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
         bool
 
+config HAVE_ARCH_PARENT_PMD_YOUNG
+        bool
+        help
+          Architectures that select this are able to set the accessed bit on
+          non-leaf PMD entries in addition to leaf PTE entries where pages are
+          mapped. For them, page table walkers that clear the accessed bit may
+          stop at non-leaf PMD entries when they do not see the accessed bit.
+
 config HAVE_ARCH_HUGE_VMAP
         bool
 
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2792879d398e..b5972eb82337 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -163,6 +163,7 @@ config X86
         select HAVE_ARCH_TRACEHOOK
         select HAVE_ARCH_TRANSPARENT_HUGEPAGE
         select HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD if X86_64
+        select HAVE_ARCH_PARENT_PMD_YOUNG         if X86_64
         select HAVE_ARCH_USERFAULTFD_WP           if X86_64 && USERFAULTFD
         select HAVE_ARCH_VMAP_STACK               if X86_64
         select HAVE_ARCH_WITHIN_STACK_FRAMES
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index a02c67291cfc..a6b5cfe1fc5a 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -846,7 +846,7 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 
 static inline int pmd_bad(pmd_t pmd)
 {
-        return (pmd_flags(pmd) & ~_PAGE_USER) != _KERNPG_TABLE;
+        return ((pmd_flags(pmd) | _PAGE_ACCESSED) & ~_PAGE_USER) != _KERNPG_TABLE;
 }
 
 static inline unsigned long pages_to_mb(unsigned long npg)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index f6a9e2e36642..1c27e6f43f80 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -550,7 +550,7 @@ int ptep_test_and_clear_young(struct vm_area_struct *vma,
         return ret;
 }
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG)
 int pmdp_test_and_clear_young(struct vm_area_struct *vma,
                               unsigned long addr, pmd_t *pmdp)
 {
@@ -562,6 +562,9 @@ int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 
         return ret;
 }
+#endif
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 int pudp_test_and_clear_young(struct vm_area_struct *vma,
                               unsigned long addr, pud_t *pudp)
 {
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 5e772392a379..08dd9b8c055a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -193,7 +193,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG)
 static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
                                             unsigned long address,
                                             pmd_t *pmdp)
@@ -214,7 +214,7 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
         BUILD_BUG();
         return 0;
 }
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG */
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
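A hedged sketch (not code from this series) of the optimization the
Kconfig help text describes: after a walker has cleared the accessed bit
on a non-leaf PMD and on the PTEs below it, it can skip the whole range
on a later pass while the parent's accessed bit remains clear. It
assumes an architecture with HAVE_ARCH_PARENT_PMD_YOUNG (x86_64 after
this patch) and a pmd_young() helper; the function itself is
hypothetical.

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Illustrative only: age the PTEs under one non-leaf PMD, skipping the
 * range entirely when the CPU has not set the parent's accessed bit
 * since the previous clearing of the parent and its children. */
static void age_one_pmd(struct vm_area_struct *vma, pmd_t *pmd,
                        unsigned long addr, unsigned long end)
{
        pte_t *pte;
        spinlock_t *ptl;

        if (IS_ENABLED(CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG) && !pmd_young(*pmd))
                return;         /* no PTE under this PMD was accessed */

        pmdp_test_and_clear_young(vma, addr, pmd);      /* clear the parent */

        pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
        for (; addr != end; pte++, addr += PAGE_SIZE)
                ptep_test_and_clear_young(vma, addr, pte);      /* children */
        pte_unmap_unlock(pte - 1, ptl);
}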
From patchwork Sat Mar 13 07:57:40 2021
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12136579
Date: Sat, 13 Mar 2021 00:57:40 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-8-yuzhao@google.com>
Subject: [PATCH v1 07/14] mm/pagewalk.c: add pud_entry_post() for post-order traversals
From: Yu Zhao
To: linux-mm@kvack.org
Add a new callback pud_entry_post() to struct mm_walk_ops so that page
table walkers can visit the non-leaf PMD entries of a PUD entry after
they have visited the leaf PTE entries. This allows page table walkers
that clear the accessed bit to take advantage of the previous commit, in
a similar way to how walk_pte_range() works for the PTE entries of a PMD
entry: they only need to take the PTL once to search all the child
entries of a parent entry.

Signed-off-by: Yu Zhao
---
 include/linux/pagewalk.h | 4 ++++
 mm/pagewalk.c            | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index b1cb6b753abb..2b68ae9d27d3 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -11,6 +11,8 @@ struct mm_walk;
  * @pgd_entry: if set, called for each non-empty PGD (top-level) entry
  * @p4d_entry: if set, called for each non-empty P4D entry
  * @pud_entry: if set, called for each non-empty PUD entry
+ * @pud_entry_post: if set, called for each non-empty PUD entry after
+ *                  pmd_entry is called, for post-order traversal.
  * @pmd_entry: if set, called for each non-empty PMD entry
  *             this handler is required to be able to handle
  *             pmd_trans_huge() pmds. They may simply choose to
@@ -41,6 +43,8 @@ struct mm_walk_ops {
                          unsigned long next, struct mm_walk *walk);
         int (*pud_entry)(pud_t *pud, unsigned long addr,
                          unsigned long next, struct mm_walk *walk);
+        int (*pud_entry_post)(pud_t *pud, unsigned long addr,
+                              unsigned long next, struct mm_walk *walk);
         int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
                          unsigned long next, struct mm_walk *walk);
         int (*pte_entry)(pte_t *pte, unsigned long addr,
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index e81640d9f177..8ed1533f7eda 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -160,6 +160,11 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
                 err = walk_pmd_range(pud, addr, next, walk);
                 if (err)
                         break;
+
+                if (ops->pud_entry_post)
+                        err = ops->pud_entry_post(pud, addr, next, walk);
+                if (err)
+                        break;
         } while (pud++, addr = next, addr != end);
 
         return err;
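A hedged sketch (not from this series) of a walker wired up to the new
callback: the accessed bits on the non-leaf PMDs of a PUD are cleared
only after the PTEs below them have been visited, matching the
post-order intent, and relying on the previous patch's
HAVE_ARCH_PARENT_PMD_YOUNG. Both handlers and their internals are
hypothetical; the .pmd_entry body is elided.

#include <linux/pagewalk.h>

/* Hypothetical pre-order handler: would visit the PTEs under each PMD
 * and clear their accessed bits (details omitted). */
static int age_pmd_entry(pmd_t *pmd, unsigned long addr,
                         unsigned long next, struct mm_walk *walk)
{
        return 0;
}

/* Hypothetical post-order handler: clear the accessed bit on each
 * non-leaf PMD once its children have been handled, so a later pass
 * can skip ranges whose parent bit is still clear. */
static int age_pud_entry_post(pud_t *pud, unsigned long addr,
                              unsigned long next, struct mm_walk *walk)
{
        pmd_t *pmd = pmd_offset(pud, addr);

        for (; addr != next; pmd++, addr = pmd_addr_end(addr, next)) {
                if (pmd_none(*pmd) || pmd_trans_huge(*pmd))
                        continue;
                pmdp_test_and_clear_young(walk->vma, addr, pmd);
        }
        return 0;
}

static const struct mm_walk_ops aging_walk_ops = {
        .pmd_entry      = age_pmd_entry,
        .pud_entry_post = age_pud_entry_post,
};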
From patchwork Sat Mar 13 07:57:41 2021
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 12136581
Date: Sat, 13 Mar 2021 00:57:41 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-9-yuzhao@google.com>
Subject: [PATCH v1 08/14] mm/vmscan.c: refactor shrink_node()
From: Yu Zhao
To: linux-mm@kvack.org

The heuristics in shrink_node() are rather independent of the rest of
the function and can be refactored into a separate helper,
prepare_scan_count(), to improve readability.
Signed-off-by: Yu Zhao
---
 mm/vmscan.c | 186 +++++++++++++++++++++++++++-------------------------
 1 file changed, 98 insertions(+), 88 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 562e87cbd7a1..1a24d2e0a4cb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2224,6 +2224,103 @@ enum scan_balance {
         SCAN_FILE,
 };
 
+static void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc)
+{
+        unsigned long file;
+        struct lruvec *target_lruvec;
+
+        target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
+
+        /*
+         * Determine the scan balance between anon and file LRUs.
+         */
+        spin_lock_irq(&target_lruvec->lru_lock);
+        sc->anon_cost = target_lruvec->anon_cost;
+        sc->file_cost = target_lruvec->file_cost;
+        spin_unlock_irq(&target_lruvec->lru_lock);
+
+        /*
+         * Target desirable inactive:active list ratios for the anon
+         * and file LRU lists.
+         */
+        if (!sc->force_deactivate) {
+                unsigned long refaults;
+
+                refaults = lruvec_page_state(target_lruvec,
+                                WORKINGSET_ACTIVATE_ANON);
+                if (refaults != target_lruvec->refaults[0] ||
+                    inactive_is_low(target_lruvec, LRU_INACTIVE_ANON))
+                        sc->may_deactivate |= DEACTIVATE_ANON;
+                else
+                        sc->may_deactivate &= ~DEACTIVATE_ANON;
+
+                /*
+                 * When refaults are being observed, it means a new
+                 * workingset is being established. Deactivate to get
+                 * rid of any stale active pages quickly.
+                 */
+                refaults = lruvec_page_state(target_lruvec,
+                                WORKINGSET_ACTIVATE_FILE);
+                if (refaults != target_lruvec->refaults[1] ||
+                    inactive_is_low(target_lruvec, LRU_INACTIVE_FILE))
+                        sc->may_deactivate |= DEACTIVATE_FILE;
+                else
+                        sc->may_deactivate &= ~DEACTIVATE_FILE;
+        } else
+                sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE;
+
+        /*
+         * If we have plenty of inactive file pages that aren't
+         * thrashing, try to reclaim those first before touching
+         * anonymous pages.
+         */
+        file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
+        if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+                sc->cache_trim_mode = 1;
+        else
+                sc->cache_trim_mode = 0;
+
+        /*
+         * Prevent the reclaimer from falling into the cache trap: as
+         * cache pages start out inactive, every cache fault will tip
+         * the scan balance towards the file LRU. And as the file LRU
+         * shrinks, so does the window for rotation from references.
+         * This means we have a runaway feedback loop where a tiny
+         * thrashing file LRU becomes infinitely more attractive than
+         * anon pages. Try to detect this based on file LRU size.
+         */
+        if (!cgroup_reclaim(sc)) {
+                unsigned long total_high_wmark = 0;
+                unsigned long free, anon;
+                int z;
+
+                free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+                file = node_page_state(pgdat, NR_ACTIVE_FILE) +
+                        node_page_state(pgdat, NR_INACTIVE_FILE);
+
+                for (z = 0; z < MAX_NR_ZONES; z++) {
+                        struct zone *zone = &pgdat->node_zones[z];
+
+                        if (!managed_zone(zone))
+                                continue;
+
+                        total_high_wmark += high_wmark_pages(zone);
+                }
+
+                /*
+                 * Consider anon: if that's low too, this isn't a
+                 * runaway file reclaim problem, but rather just
+                 * extreme pressure. Reclaim as per usual then.
+                 */
+                anon = node_page_state(pgdat, NR_INACTIVE_ANON);
+
+                sc->file_is_tiny =
+                        file + free <= total_high_wmark &&
+                        !(sc->may_deactivate & DEACTIVATE_ANON) &&
+                        anon >> sc->priority;
+        }
+}
+
 /*
  * Determine how aggressively the anon and file LRU lists should be
  * scanned. The relative value of each set of LRU lists is determined
@@ -2669,7 +2766,6 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
         unsigned long nr_reclaimed, nr_scanned;
         struct lruvec *target_lruvec;
         bool reclaimable = false;
-        unsigned long file;
 
         target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
 
@@ -2679,93 +2775,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
         nr_reclaimed = sc->nr_reclaimed;
         nr_scanned = sc->nr_scanned;
 
-        /*
-         * Determine the scan balance between anon and file LRUs.
- */ - spin_lock_irq(&target_lruvec->lru_lock); - sc->anon_cost = target_lruvec->anon_cost; - sc->file_cost = target_lruvec->file_cost; - spin_unlock_irq(&target_lruvec->lru_lock); - - /* - * Target desirable inactive:active list ratios for the anon - * and file LRU lists. - */ - if (!sc->force_deactivate) { - unsigned long refaults; - - refaults = lruvec_page_state(target_lruvec, - WORKINGSET_ACTIVATE_ANON); - if (refaults != target_lruvec->refaults[0] || - inactive_is_low(target_lruvec, LRU_INACTIVE_ANON)) - sc->may_deactivate |= DEACTIVATE_ANON; - else - sc->may_deactivate &= ~DEACTIVATE_ANON; - - /* - * When refaults are being observed, it means a new - * workingset is being established. Deactivate to get - * rid of any stale active pages quickly. - */ - refaults = lruvec_page_state(target_lruvec, - WORKINGSET_ACTIVATE_FILE); - if (refaults != target_lruvec->refaults[1] || - inactive_is_low(target_lruvec, LRU_INACTIVE_FILE)) - sc->may_deactivate |= DEACTIVATE_FILE; - else - sc->may_deactivate &= ~DEACTIVATE_FILE; - } else - sc->may_deactivate = DEACTIVATE_ANON | DEACTIVATE_FILE; - - /* - * If we have plenty of inactive file pages that aren't - * thrashing, try to reclaim those first before touching - * anonymous pages. - */ - file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE); - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE)) - sc->cache_trim_mode = 1; - else - sc->cache_trim_mode = 0; - - /* - * Prevent the reclaimer from falling into the cache trap: as - * cache pages start out inactive, every cache fault will tip - * the scan balance towards the file LRU. And as the file LRU - * shrinks, so does the window for rotation from references. - * This means we have a runaway feedback loop where a tiny - * thrashing file LRU becomes infinitely more attractive than - * anon pages. Try to detect this based on file LRU size. - */ - if (!cgroup_reclaim(sc)) { - unsigned long total_high_wmark = 0; - unsigned long free, anon; - int z; - - free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES); - file = node_page_state(pgdat, NR_ACTIVE_FILE) + - node_page_state(pgdat, NR_INACTIVE_FILE); - - for (z = 0; z < MAX_NR_ZONES; z++) { - struct zone *zone = &pgdat->node_zones[z]; - if (!managed_zone(zone)) - continue; - - total_high_wmark += high_wmark_pages(zone); - } - - /* - * Consider anon: if that's low too, this isn't a - * runaway file reclaim problem, but rather just - * extreme pressure. Reclaim as per usual then. 
- */ - anon = node_page_state(pgdat, NR_INACTIVE_ANON); - - sc->file_is_tiny = - file + free <= total_high_wmark && - !(sc->may_deactivate & DEACTIVATE_ANON) && - anon >> sc->priority; - } + prepare_scan_count(pgdat, sc); shrink_node_memcgs(pgdat, sc); From patchwork Sat Mar 13 07:57:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Zhao X-Patchwork-Id: 12136585 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5049AC433DB for ; Sat, 13 Mar 2021 07:58:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D869D6148E for ; Sat, 13 Mar 2021 07:58:20 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org D869D6148E Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id F0A2B6B007B; Sat, 13 Mar 2021 02:58:10 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id E6C336B007D; Sat, 13 Mar 2021 02:58:10 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CBB726B0080; Sat, 13 Mar 2021 02:58:10 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0018.hostedemail.com [216.40.44.18]) by kanga.kvack.org (Postfix) with ESMTP id A8FA86B007D for ; Sat, 13 Mar 2021 02:58:10 -0500 (EST) Received: from smtpin15.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 66C078248068 for ; Sat, 13 Mar 2021 07:58:10 +0000 (UTC) X-FDA: 77914097940.15.12E2565 Received: from mail-qk1-f201.google.com (mail-qk1-f201.google.com [209.85.222.201]) by imf20.hostedemail.com (Postfix) with ESMTP id 581E746B0 for ; Sat, 13 Mar 2021 07:58:05 +0000 (UTC) Received: by mail-qk1-f201.google.com with SMTP id g18so19927214qki.15 for ; Fri, 12 Mar 2021 23:58:09 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=7aNjMZwXzkZhXFvRmRWQJ8BQ+8lusUpXBFHauhX2ubg=; b=rXTENBp2Eom7kHIkgQlwaM0zAjOFi5gmzyQ9fUwZQJp4tjb12IZlxofeTMBMfAGa/r L4Ghc3L00KAFuV2LaP7uyJH7AsU4qMsIGq1k1CkPhOMdO7EV4BDTbgd4vEf68FTu94xi vXrlWZYOskUlSCRkRuZtatqD65DIQyZkjMKdk2hKBnk6QJmcpPB6RWsgv88u3qVxEBZx iiEhsjInkvzH6qtUYApIn6cqLI7Fd+8G1HrkDMmx13q4PXkdeunv1Az0GMeKsUNzGYYs e4N6HA5c9v+Un/TrJ4yGhAbzwYgJSTU4Xrzr5/9QWPKNcFUkXgh0KosPnfdwtEviCEBv 6pmg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=7aNjMZwXzkZhXFvRmRWQJ8BQ+8lusUpXBFHauhX2ubg=; b=OpTM7trYCh4Gixuc/C0b/gciy0TpHYqb/u6LiVPcovqvawLIX/0ILd9zUg15k1l0tv JB660jHfwSMejg9ND5QWAqT0hlrdT3w9ruRxfV3icjKEt/rTMIQfVuQWtjphTm3hT6yU pxCcCFnY9mgGFEZ2Yn5/sXdla+izcSSX3pKh6bOcmeeL43C/QNu2uTJKn3ci9XdwPXdL 
IF2EJUa2Awg2iHJvjssCLHNETtvLjZPVKnw7z8BfAavEOEj8e6FsA3STgFfDWh8KYFAG BOPe59XuIYDNlgCKndekvk1k7IFoT9GPQS0Eb12CQl+Vl3oP8U5Dc0+U8OXgjVg7kbbE gfBg== X-Gm-Message-State: AOAM532cRPj6McLsLe71C26z1+lvkeaNULUe0N3+36R0OIdAlTUqiKpg kj9A4bGUGcXnP3C3Dz+xJLHmy4KlSh5dNwv6jBqICEUU4zjMFBgfZLHcIt9rJ7ffuXf2lIpxY0f 9j+z+L9PuMjKFIts1noq3dSljNK7CSRNwsR4U7IzxIkIr07ekNKTjibWb X-Google-Smtp-Source: ABdhPJxXAZ3oX16iFANwEv0b7ybctnsmZAK24tAjZNdUXRU2Fm50sCFyGpIqOrn3ll+aJz5mcnB7+tMAtOQ= X-Received: from yuzhao.bld.corp.google.com ([2620:15c:183:200:f931:d3e4:faa0:4f74]) (user=yuzhao job=sendgmr) by 2002:a0c:b8a3:: with SMTP id y35mr15828724qvf.23.1615622288067; Fri, 12 Mar 2021 23:58:08 -0800 (PST) Date: Sat, 13 Mar 2021 00:57:42 -0700 In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com> Message-Id: <20210313075747.3781593-10-yuzhao@google.com> Mime-Version: 1.0 References: <20210313075747.3781593-1-yuzhao@google.com> X-Mailer: git-send-email 2.31.0.rc2.261.g7f71774620-goog Subject: [PATCH v1 09/14] mm: multigenerational lru: mm_struct list From: Yu Zhao To: linux-mm@kvack.org Cc: Alex Shi , Andrew Morton , Dave Hansen , Hillf Danton , Johannes Weiner , Joonsoo Kim , Matthew Wilcox , Mel Gorman , Michal Hocko , Roman Gushchin , Vlastimil Babka , Wei Yang , Yang Shi , Ying Huang , linux-kernel@vger.kernel.org, page-reclaim@google.com, Yu Zhao X-Stat-Signature: b4axyq5z4dns7hip7yondgk7rf6ja8j8 X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 581E746B0 Received-SPF: none (flex--yuzhao.bounces.google.com>: No applicable sender policy available) receiver=imf20; identity=mailfrom; envelope-from="<3kHBMYAYKCKsjfkSLZRZZRWP.NZXWTYfi-XXVgLNV.ZcR@flex--yuzhao.bounces.google.com>"; helo=mail-qk1-f201.google.com; client-ip=209.85.222.201 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1615622285-271431 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add an infrastructure that maintains either a system-wide mm_struct list or per-memcg mm_struct lists. Multiple threads can concurrently work on the same mm_struct list, and each of them will be given a different mm_struct. Those who finish early can optionally wait on the rest after the iterator has reached the end of the list. This infrastructure also tracks whether an mm_struct is being used on any CPUs or has been used since the last time a worker looked at it. In other words, workers will not be given an mm_struct that belongs to a process that has been sleeping. 
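For illustration only, here is a minimal sketch of how a reclaim path might consume this iterator. The wrapper name walk_mm_list() and the comments about waiting are assumptions on my part; get_next_mm() is the helper added by this patch, and walk_mm() is the per-mm page table walker introduced later in the series.

	/* sketch, not part of the patch: one worker draining the mm list */
	static void walk_mm_list(struct lruvec *lruvec, unsigned long next_seq,
				 int swappiness)
	{
		bool last;
		struct mm_struct *mm = NULL;

		/*
		 * Each call hands back a different mm_struct via @mm and drops
		 * the reference on the previous one; a NULL @mm means this
		 * worker has reached the end of the list (or the walk for
		 * @next_seq has already been completed by others).
		 */
		do {
			last = get_next_mm(lruvec, next_seq, swappiness, &mm);
			if (mm)
				walk_mm(lruvec, mm, swappiness);
			cond_resched();
		} while (mm);

		/*
		 * @last is true only for the worker that finishes the list with
		 * no other workers still running; per the commit message, the
		 * others may optionally wait on mm_list->nodes[nid].wait.
		 */
	}

Workers that race on the same list therefore never see the same mm_struct twice in one round, and an mm whose process has been idle since the previous round is skipped by should_skip_mm() inside get_next_mm().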
Signed-off-by: Yu Zhao --- fs/exec.c | 2 + include/linux/memcontrol.h | 4 + include/linux/mm_types.h | 135 +++++++++++++++++++ include/linux/mmzone.h | 2 - kernel/exit.c | 1 + kernel/fork.c | 10 ++ kernel/kthread.c | 1 + kernel/sched/core.c | 2 + mm/memcontrol.c | 28 ++++ mm/vmscan.c | 263 +++++++++++++++++++++++++++++++++++++ 10 files changed, 446 insertions(+), 2 deletions(-) diff --git a/fs/exec.c b/fs/exec.c index 18594f11c31f..c691d4d7720c 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -1008,6 +1008,7 @@ static int exec_mmap(struct mm_struct *mm) active_mm = tsk->active_mm; tsk->active_mm = mm; tsk->mm = mm; + lru_gen_add_mm(mm); /* * This prevents preemption while active_mm is being loaded and * it and mm are being updated, which could cause problems for @@ -1018,6 +1019,7 @@ static int exec_mmap(struct mm_struct *mm) if (!IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM)) local_irq_enable(); activate_mm(active_mm, mm); + lru_gen_switch_mm(active_mm, mm); if (IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM)) local_irq_enable(); tsk->mm->vmacache_seqnum = 0; diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index f325aeb4b4e8..591557c5b7e2 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -335,6 +335,10 @@ struct mem_cgroup { struct deferred_split deferred_split_queue; #endif +#ifdef CONFIG_LRU_GEN + struct lru_gen_mm_list *mm_list; +#endif + struct mem_cgroup_per_node *nodeinfo[0]; /* WARNING: nodeinfo must be the last member here */ }; diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 0974ad501a47..b8a038a016f2 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -15,6 +15,8 @@ #include #include #include +#include +#include #include @@ -382,6 +384,8 @@ struct core_state { struct completion startup; }; +#define ANON_AND_FILE 2 + struct kioctx_table; struct mm_struct { struct { @@ -560,6 +564,22 @@ struct mm_struct { #ifdef CONFIG_IOMMU_SUPPORT u32 pasid; +#endif +#ifdef CONFIG_LRU_GEN + struct { + /* node of a global or per-memcg mm list */ + struct list_head list; +#ifdef CONFIG_MEMCG + /* points to memcg of the owner task above */ + struct mem_cgroup *memcg; +#endif + /* indicates this mm has been used since last walk */ + nodemask_t nodes[ANON_AND_FILE]; +#ifndef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH + /* number of cpus that are using this mm */ + atomic_t nr_cpus; +#endif + } lru_gen; #endif } __randomize_layout; @@ -587,6 +607,121 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm) return (struct cpumask *)&mm->cpu_bitmap; } +#ifdef CONFIG_LRU_GEN + +struct lru_gen_mm_list { + /* head of a global or per-memcg mm list */ + struct list_head head; + /* protects the list */ + spinlock_t lock; + struct { + /* set to max_seq after each round of walk */ + unsigned long cur_seq; + /* next mm on the list to walk */ + struct list_head *iter; + /* to wait for last worker to finish */ + struct wait_queue_head wait; + /* number of concurrent workers */ + int nr_workers; + } nodes[0]; +}; + +void lru_gen_init_mm(struct mm_struct *mm); +void lru_gen_add_mm(struct mm_struct *mm); +void lru_gen_del_mm(struct mm_struct *mm); +#ifdef CONFIG_MEMCG +int lru_gen_alloc_mm_list(struct mem_cgroup *memcg); +void lru_gen_free_mm_list(struct mem_cgroup *memcg); +void lru_gen_migrate_mm(struct mm_struct *mm); +#endif + +/* + * Track usage so mms that haven't been used since last walk can be skipped. + * + * This function introduces a theoretical overhead for each mm switch, but it + * hasn't been measurable. 
+ */ +static inline void lru_gen_switch_mm(struct mm_struct *old, struct mm_struct *new) +{ + int file; + + /* exclude init_mm, efi_mm, etc. */ + if (!core_kernel_data((unsigned long)old)) { + VM_BUG_ON(old == &init_mm); + + for (file = 0; file < ANON_AND_FILE; file++) + nodes_setall(old->lru_gen.nodes[file]); + +#ifndef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH + atomic_dec(&old->lru_gen.nr_cpus); + VM_BUG_ON_MM(atomic_read(&old->lru_gen.nr_cpus) < 0, old); +#endif + } else + VM_BUG_ON_MM(READ_ONCE(old->lru_gen.list.prev) || + READ_ONCE(old->lru_gen.list.next), old); + + if (!core_kernel_data((unsigned long)new)) { + VM_BUG_ON(new == &init_mm); + +#ifndef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH + atomic_inc(&new->lru_gen.nr_cpus); + VM_BUG_ON_MM(atomic_read(&new->lru_gen.nr_cpus) < 0, new); +#endif + } else + VM_BUG_ON_MM(READ_ONCE(new->lru_gen.list.prev) || + READ_ONCE(new->lru_gen.list.next), new); +} + +/* Returns whether the mm is being used on any cpus. */ +static inline bool lru_gen_mm_is_active(struct mm_struct *mm) +{ +#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH + return !cpumask_empty(mm_cpumask(mm)); +#else + return atomic_read(&mm->lru_gen.nr_cpus); +#endif +} + +#else /* CONFIG_LRU_GEN */ + +static inline void lru_gen_init_mm(struct mm_struct *mm) +{ +} + +static inline void lru_gen_add_mm(struct mm_struct *mm) +{ +} + +static inline void lru_gen_del_mm(struct mm_struct *mm) +{ +} + +#ifdef CONFIG_MEMCG +static inline int lru_gen_alloc_mm_list(struct mem_cgroup *memcg) +{ + return 0; +} + +static inline void lru_gen_free_mm_list(struct mem_cgroup *memcg) +{ +} + +static inline void lru_gen_migrate_mm(struct mm_struct *mm) +{ +} +#endif + +static inline void lru_gen_switch_mm(struct mm_struct *old, struct mm_struct *new) +{ +} + +static inline bool lru_gen_mm_is_active(struct mm_struct *mm) +{ + return false; +} + +#endif /* CONFIG_LRU_GEN */ + struct mmu_gather; extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm); extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm); diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 47946cec7584..a99a1050565a 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -285,8 +285,6 @@ static inline bool is_active_lru(enum lru_list lru) return (lru == LRU_ACTIVE_ANON || lru == LRU_ACTIVE_FILE); } -#define ANON_AND_FILE 2 - enum lruvec_flags { LRUVEC_CONGESTED, /* lruvec has many dirty pages * backed by a congested BDI diff --git a/kernel/exit.c b/kernel/exit.c index 04029e35e69a..e4292717ce37 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -422,6 +422,7 @@ void mm_update_next_owner(struct mm_struct *mm) goto retry; } WRITE_ONCE(mm->owner, c); + lru_gen_migrate_mm(mm); task_unlock(c); put_task_struct(c); } diff --git a/kernel/fork.c b/kernel/fork.c index d3171e8e88e5..e261b797955d 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -665,6 +665,7 @@ static void check_mm(struct mm_struct *mm) #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS VM_BUG_ON_MM(mm->pmd_huge_pte, mm); #endif + VM_BUG_ON_MM(lru_gen_mm_is_active(mm), mm); } #define allocate_mm() (kmem_cache_alloc(mm_cachep, GFP_KERNEL)) @@ -1047,6 +1048,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p, goto fail_nocontext; mm->user_ns = get_user_ns(user_ns); + lru_gen_init_mm(mm); return mm; fail_nocontext: @@ -1089,6 +1091,7 @@ static inline void __mmput(struct mm_struct *mm) } if (mm->binfmt) module_put(mm->binfmt->module); + lru_gen_del_mm(mm); mmdrop(mm); } @@ 
-2513,6 +2516,13 @@ pid_t kernel_clone(struct kernel_clone_args *args) get_task_struct(p); } + if (IS_ENABLED(CONFIG_LRU_GEN) && !(clone_flags & CLONE_VM)) { + /* lock p to synchronize with memcg migration */ + task_lock(p); + lru_gen_add_mm(p->mm); + task_unlock(p); + } + wake_up_new_task(p); /* forking complete and child started to run, tell ptracer */ diff --git a/kernel/kthread.c b/kernel/kthread.c index 1578973c5740..8da7767bb06a 100644 --- a/kernel/kthread.c +++ b/kernel/kthread.c @@ -1303,6 +1303,7 @@ void kthread_use_mm(struct mm_struct *mm) tsk->mm = mm; membarrier_update_current_mm(mm); switch_mm_irqs_off(active_mm, mm, tsk); + lru_gen_switch_mm(active_mm, mm); local_irq_enable(); task_unlock(tsk); #ifdef finish_arch_post_lock_switch diff --git a/kernel/sched/core.c b/kernel/sched/core.c index ca2bb629595f..56274a14ce09 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -4308,6 +4308,7 @@ context_switch(struct rq *rq, struct task_struct *prev, * finish_task_switch()'s mmdrop(). */ switch_mm_irqs_off(prev->active_mm, next->mm, next); + lru_gen_switch_mm(prev->active_mm, next->mm); if (!prev->mm) { // from kernel /* will mmdrop() in finish_task_switch(). */ @@ -7599,6 +7600,7 @@ void idle_task_exit(void) if (mm != &init_mm) { switch_mm(mm, &init_mm, current); + lru_gen_switch_mm(mm, &init_mm); finish_arch_post_lock_switch(); } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 845eec01ef9d..5836780fe138 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5209,6 +5209,7 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg) free_mem_cgroup_per_node_info(memcg, node); free_percpu(memcg->vmstats_percpu); free_percpu(memcg->vmstats_local); + lru_gen_free_mm_list(memcg); kfree(memcg); } @@ -5261,6 +5262,9 @@ static struct mem_cgroup *mem_cgroup_alloc(void) if (alloc_mem_cgroup_per_node_info(memcg, node)) goto fail; + if (lru_gen_alloc_mm_list(memcg)) + goto fail; + if (memcg_wb_domain_init(memcg, GFP_KERNEL)) goto fail; @@ -6165,6 +6169,29 @@ static void mem_cgroup_move_task(void) } #endif +#ifdef CONFIG_LRU_GEN +static void mem_cgroup_attach(struct cgroup_taskset *tset) +{ + struct cgroup_subsys_state *css; + struct task_struct *task = NULL; + + cgroup_taskset_for_each_leader(task, css, tset) + ; + + if (!task) + return; + + task_lock(task); + if (task->mm && task->mm->owner == task) + lru_gen_migrate_mm(task->mm); + task_unlock(task); +} +#else +static void mem_cgroup_attach(struct cgroup_taskset *tset) +{ +} +#endif + static int seq_puts_memcg_tunable(struct seq_file *m, unsigned long value) { if (value == PAGE_COUNTER_MAX) @@ -6505,6 +6532,7 @@ struct cgroup_subsys memory_cgrp_subsys = { .css_free = mem_cgroup_css_free, .css_reset = mem_cgroup_css_reset, .can_attach = mem_cgroup_can_attach, + .attach = mem_cgroup_attach, .cancel_attach = mem_cgroup_cancel_attach, .post_attach = mem_cgroup_move_task, .dfl_cftypes = memory_files, diff --git a/mm/vmscan.c b/mm/vmscan.c index 1a24d2e0a4cb..f7657ab0d4b7 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -4314,3 +4314,266 @@ void check_move_unevictable_pages(struct pagevec *pvec) } } EXPORT_SYMBOL_GPL(check_move_unevictable_pages); + +#ifdef CONFIG_LRU_GEN + +/****************************************************************************** + * global and per-memcg mm list + ******************************************************************************/ + +/* + * After pages are faulted in, they become the youngest generation. They must + * go through aging process twice before they can be evicted. 
After first scan, + * their accessed bit set during initial faults are cleared and they become the + * second youngest generation. And second scan makes sure they haven't been used + * since the first. + */ +#define MIN_NR_GENS 2 + +static struct lru_gen_mm_list *global_mm_list; + +static struct lru_gen_mm_list *alloc_mm_list(void) +{ + int nid; + struct lru_gen_mm_list *mm_list; + + mm_list = kzalloc(struct_size(mm_list, nodes, nr_node_ids), GFP_KERNEL); + if (!mm_list) + return NULL; + + INIT_LIST_HEAD(&mm_list->head); + spin_lock_init(&mm_list->lock); + + for_each_node(nid) { + mm_list->nodes[nid].cur_seq = MIN_NR_GENS - 1; + mm_list->nodes[nid].iter = &mm_list->head; + init_waitqueue_head(&mm_list->nodes[nid].wait); + } + + return mm_list; +} + +static struct lru_gen_mm_list *get_mm_list(struct mem_cgroup *memcg) +{ +#ifdef CONFIG_MEMCG + if (!mem_cgroup_disabled()) + return memcg ? memcg->mm_list : root_mem_cgroup->mm_list; +#endif + VM_BUG_ON(memcg); + + return global_mm_list; +} + +void lru_gen_init_mm(struct mm_struct *mm) +{ + int file; + + INIT_LIST_HEAD(&mm->lru_gen.list); +#ifdef CONFIG_MEMCG + mm->lru_gen.memcg = NULL; +#endif +#ifndef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH + atomic_set(&mm->lru_gen.nr_cpus, 0); +#endif + for (file = 0; file < ANON_AND_FILE; file++) + nodes_clear(mm->lru_gen.nodes[file]); +} + +void lru_gen_add_mm(struct mm_struct *mm) +{ + struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm); + struct lru_gen_mm_list *mm_list = get_mm_list(memcg); + + VM_BUG_ON_MM(!list_empty(&mm->lru_gen.list), mm); +#ifdef CONFIG_MEMCG + VM_BUG_ON_MM(mm->lru_gen.memcg, mm); + WRITE_ONCE(mm->lru_gen.memcg, memcg); +#endif + spin_lock(&mm_list->lock); + list_add_tail(&mm->lru_gen.list, &mm_list->head); + spin_unlock(&mm_list->lock); +} + +void lru_gen_del_mm(struct mm_struct *mm) +{ + int nid; +#ifdef CONFIG_MEMCG + struct lru_gen_mm_list *mm_list = get_mm_list(mm->lru_gen.memcg); +#else + struct lru_gen_mm_list *mm_list = get_mm_list(NULL); +#endif + + spin_lock(&mm_list->lock); + + for_each_node(nid) { + if (mm_list->nodes[nid].iter != &mm->lru_gen.list) + continue; + + mm_list->nodes[nid].iter = mm_list->nodes[nid].iter->next; + if (mm_list->nodes[nid].iter == &mm_list->head) + WRITE_ONCE(mm_list->nodes[nid].cur_seq, + mm_list->nodes[nid].cur_seq + 1); + } + + list_del_init(&mm->lru_gen.list); + + spin_unlock(&mm_list->lock); + +#ifdef CONFIG_MEMCG + mem_cgroup_put(mm->lru_gen.memcg); + WRITE_ONCE(mm->lru_gen.memcg, NULL); +#endif +} + +#ifdef CONFIG_MEMCG +int lru_gen_alloc_mm_list(struct mem_cgroup *memcg) +{ + if (mem_cgroup_disabled()) + return 0; + + memcg->mm_list = alloc_mm_list(); + + return memcg->mm_list ? 
0 : -ENOMEM; +} + +void lru_gen_free_mm_list(struct mem_cgroup *memcg) +{ + kfree(memcg->mm_list); + memcg->mm_list = NULL; +} + +void lru_gen_migrate_mm(struct mm_struct *mm) +{ + struct mem_cgroup *memcg; + + lockdep_assert_held(&mm->owner->alloc_lock); + + if (mem_cgroup_disabled()) + return; + + rcu_read_lock(); + memcg = mem_cgroup_from_task(mm->owner); + rcu_read_unlock(); + if (memcg == mm->lru_gen.memcg) + return; + + VM_BUG_ON_MM(!mm->lru_gen.memcg, mm); + VM_BUG_ON_MM(list_empty(&mm->lru_gen.list), mm); + + lru_gen_del_mm(mm); + lru_gen_add_mm(mm); +} + +static bool mm_has_migrated(struct mm_struct *mm, struct mem_cgroup *memcg) +{ + return READ_ONCE(mm->lru_gen.memcg) != memcg; +} +#else +static bool mm_has_migrated(struct mm_struct *mm, struct mem_cgroup *memcg) +{ + return false; +} +#endif + +static bool should_skip_mm(struct mm_struct *mm, int nid, int swappiness) +{ + int file; + unsigned long size = 0; + + if (mm_is_oom_victim(mm)) + return true; + + for (file = !swappiness; file < ANON_AND_FILE; file++) { + if (lru_gen_mm_is_active(mm) || node_isset(nid, mm->lru_gen.nodes[file])) + size += file ? get_mm_counter(mm, MM_FILEPAGES) : + get_mm_counter(mm, MM_ANONPAGES) + + get_mm_counter(mm, MM_SHMEMPAGES); + } + + if (size < SWAP_CLUSTER_MAX) + return true; + + return !mmget_not_zero(mm); +} + +/* To support multiple workers that concurrently walk mm list. */ +static bool get_next_mm(struct lruvec *lruvec, unsigned long next_seq, + int swappiness, struct mm_struct **iter) +{ + bool last = true; + struct mm_struct *mm = NULL; + int nid = lruvec_pgdat(lruvec)->node_id; + struct mem_cgroup *memcg = lruvec_memcg(lruvec); + struct lru_gen_mm_list *mm_list = get_mm_list(memcg); + + if (*iter) + mmput_async(*iter); + else if (next_seq <= READ_ONCE(mm_list->nodes[nid].cur_seq)) + return false; + + spin_lock(&mm_list->lock); + + VM_BUG_ON(next_seq > mm_list->nodes[nid].cur_seq + 1); + VM_BUG_ON(*iter && next_seq < mm_list->nodes[nid].cur_seq); + VM_BUG_ON(*iter && !mm_list->nodes[nid].nr_workers); + + if (next_seq <= mm_list->nodes[nid].cur_seq) { + last = *iter; + goto done; + } + + if (mm_list->nodes[nid].iter == &mm_list->head) { + VM_BUG_ON(*iter || mm_list->nodes[nid].nr_workers); + mm_list->nodes[nid].iter = mm_list->nodes[nid].iter->next; + } + + while (!mm && mm_list->nodes[nid].iter != &mm_list->head) { + mm = list_entry(mm_list->nodes[nid].iter, struct mm_struct, lru_gen.list); + mm_list->nodes[nid].iter = mm_list->nodes[nid].iter->next; + if (should_skip_mm(mm, nid, swappiness)) + mm = NULL; + } + + if (mm_list->nodes[nid].iter == &mm_list->head) + WRITE_ONCE(mm_list->nodes[nid].cur_seq, + mm_list->nodes[nid].cur_seq + 1); +done: + if (*iter && !mm) + mm_list->nodes[nid].nr_workers--; + if (!*iter && mm) + mm_list->nodes[nid].nr_workers++; + + last = last && !mm_list->nodes[nid].nr_workers && + mm_list->nodes[nid].iter == &mm_list->head; + + spin_unlock(&mm_list->lock); + + *iter = mm; + + return last; +} + +/****************************************************************************** + * initialization + ******************************************************************************/ + +static int __init init_lru_gen(void) +{ + if (mem_cgroup_disabled()) { + global_mm_list = alloc_mm_list(); + if (!global_mm_list) { + pr_err("lru_gen: failed to allocate global mm list\n"); + return -ENOMEM; + } + } + + return 0; +}; +/* + * We want to run as early as possible because some debug code, e.g., + * dma_resv_lockdep(), calls mm_alloc() and mmput(). 
We only depend on mm_kobj, + * which is initialized one stage earlier by postcore_initcall(). + */ +arch_initcall(init_lru_gen); + +#endif /* CONFIG_LRU_GEN */ From patchwork Sat Mar 13 07:57:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Zhao X-Patchwork-Id: 12136591 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60095C433E0 for ; Sat, 13 Mar 2021 07:58:30 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id BC09364F1D for ; Sat, 13 Mar 2021 07:58:29 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BC09364F1D Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 1C1AD6B0080; Sat, 13 Mar 2021 02:58:15 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 19BCD6B0081; Sat, 13 Mar 2021 02:58:15 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E45628D0001; Sat, 13 Mar 2021 02:58:14 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0232.hostedemail.com [216.40.44.232]) by kanga.kvack.org (Postfix) with ESMTP id B973F6B0080 for ; Sat, 13 Mar 2021 02:58:14 -0500 (EST) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 5AF3918019B09 for ; Sat, 13 Mar 2021 07:58:12 +0000 (UTC) X-FDA: 77914098024.18.A67D090 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) by imf02.hostedemail.com (Postfix) with ESMTP id 9EEA6400953E for ; Sat, 13 Mar 2021 07:57:59 +0000 (UTC) Received: by mail-yb1-f202.google.com with SMTP id u1so31974946ybu.14 for ; Fri, 12 Mar 2021 23:58:10 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=lKQl5pndqzB+9aY2sJazZHwXRaTv0wpZFLyUcYj7GQA=; b=ocmCOVDHi5OSSh/YMeG/LdNp1eeWdPoEMakQGeAxNOTTy4sPlhb0ca4Ygnm14qrLsD 7XlvUWfcVDj+VdeKuBFZ02EJcowKSOpYpfqbnXi30dSRVEXwEh9UEJHsHiDUDrtJMPmN 6yygmHTQqf4ygLaa/NYlNiVWi/Q0IQLt9NYECQdaBfahTxezfSJ5IMvCQffZtqs/lqJZ FI1m5ZDkYiZf082Xl9ELwXTp+u4V/ZvflKPEeeP3UDh01OqomXqhXIAfYogyshWFmyJ7 6IUIbFbalKQBZVgSeoVTfLzQ3nu2sLzqLhWOaS0ioFZEyttu8Dq+JSXK5U4Baz2yL4vT duNA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=lKQl5pndqzB+9aY2sJazZHwXRaTv0wpZFLyUcYj7GQA=; b=V4S/OU/K6WZIcRInPkt/OE/DHb4H1o1VDduY6fKADEGxvC+Jqn7NwaeoOLwSD8T8k4 xA220xRmmKMp9+3nvNQ0aQ0OUwQz2rFj42RMDQTq7ef/acdPfkMRzSuJ3Xy5Dcu6XAoS Ze0SaP9ANWAeLdvQzCDg81s6E3BRimcW0GY+2VECjCxTddopx6tEtd39aNcZqfZDm4R5 Gm6tcy4EW0WH9OffvEp+0EA5By1JdzVednu7hCxxdDcI5kx1whmysrzCVAvlvc/UBw77 
7ygRaCjzIrA0ewL3cSsSWUxMwVg9Q+btM8bYLs5Et47ZEpRQb6haLM9EiEH7H3gfB3xh QYAw== X-Gm-Message-State: AOAM532KD+Y15fuZ3QYf82k+nw0ZAIcYTYSVRgf2ziwWU9+LnXXYxNTH bdId08VySvvzlHpekBbJm/nTJ7Pljej31Qq8pR8qYZpkHMP57Zm5gcWB78dm93ovyx6JejqEUBH prlSH3GeFafYuaNGyBVB0MiNBFagpc/PN4TzryKcLBc6tMBWCL8P+4nZ+ X-Google-Smtp-Source: ABdhPJxmaAV3Zuy65DGlCkT6I0DbSJaatE40ypeQ7LndNLV2Hp9Q9Eq4d4osGDWr2nCUrvQqAxFej4TvfJo= X-Received: from yuzhao.bld.corp.google.com ([2620:15c:183:200:f931:d3e4:faa0:4f74]) (user=yuzhao job=sendgmr) by 2002:a25:2f97:: with SMTP id v145mr25917083ybv.221.1615622289389; Fri, 12 Mar 2021 23:58:09 -0800 (PST) Date: Sat, 13 Mar 2021 00:57:43 -0700 In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com> Message-Id: <20210313075747.3781593-11-yuzhao@google.com> Mime-Version: 1.0 References: <20210313075747.3781593-1-yuzhao@google.com> X-Mailer: git-send-email 2.31.0.rc2.261.g7f71774620-goog Subject: [PATCH v1 10/14] mm: multigenerational lru: core From: Yu Zhao To: linux-mm@kvack.org Cc: Alex Shi , Andrew Morton , Dave Hansen , Hillf Danton , Johannes Weiner , Joonsoo Kim , Matthew Wilcox , Mel Gorman , Michal Hocko , Roman Gushchin , Vlastimil Babka , Wei Yang , Yang Shi , Ying Huang , linux-kernel@vger.kernel.org, page-reclaim@google.com, Yu Zhao X-Stat-Signature: mowz8keo8ia1r33n34k16e737sua11tw X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 9EEA6400953E Received-SPF: none (flex--yuzhao.bounces.google.com>: No applicable sender policy available) receiver=imf02; identity=mailfrom; envelope-from="<3kXBMYAYKCKwkglTMaSaaSXQ.OaYXUZgj-YYWhMOW.adS@flex--yuzhao.bounces.google.com>"; helo=mail-yb1-f202.google.com; client-ip=209.85.219.202 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1615622279-115821 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Evictable pages are divided into multiple generations for each lruvec. The youngest generation number is stored in max_seq for both anon and file types as they are aged on an equal footing. The oldest generation numbers are stored in min_seq[2] separately for anon and file types as clean file pages can be evicted regardless of may_swap or may_writepage. Generation numbers are truncated into ilog2(MAX_NR_GENS)+1 bits in order to fit into page->flags. The sliding window technique is used to prevent truncated generation numbers from overlapping. Each truncated generation number is an index to lruvec->evictable.lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES]. Evictable pages are added to the per-zone lists indexed by max_seq or min_seq[2] (modulo MAX_NR_GENS), depending on whether they are being faulted in or read ahead. The workflow comprises two conceptually independent functions: the aging and the eviction. The aging produces young generations. Given an lruvec, the aging walks the mm_struct list associated with this lruvec, i.e., memcg->mm_list or global_mm_list, to scan page tables for referenced pages. Upon finding one, the aging updates its generation number to max_seq. After each round of scan, the aging increments max_seq. Since scans are differential with respect to referenced pages, the cost is roughly proportional to their number. The eviction consumes old generations. Given an lruvec, the eviction scans the pages on the per-zone lists indexed by either of min_seq[2]. It selects a type based on the values of min_seq[2] and swappiness. 
During a scan, the eviction either sorts or isolates a page, depending on whether the aging has updated its generation number or not. When it finds all the per-zone lists are empty, the eviction increments min_seq[2] indexed by this selected type. The eviction triggers the aging when both of min_seq[2] reaches max_seq-1, assuming both anon and file types are reclaimable. Signed-off-by: Yu Zhao --- include/linux/mm.h | 1 + include/linux/mm_inline.h | 194 +++++ include/linux/mmzone.h | 54 ++ include/linux/page-flags-layout.h | 20 +- mm/huge_memory.c | 3 +- mm/mm_init.c | 13 +- mm/mmzone.c | 2 + mm/swap.c | 4 + mm/swapfile.c | 4 + mm/vmscan.c | 1255 +++++++++++++++++++++++++++++ 10 files changed, 1541 insertions(+), 9 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 77e64e3eac80..ac57ea124fb8 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1070,6 +1070,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf); #define ZONES_PGOFF (NODES_PGOFF - ZONES_WIDTH) #define LAST_CPUPID_PGOFF (ZONES_PGOFF - LAST_CPUPID_WIDTH) #define KASAN_TAG_PGOFF (LAST_CPUPID_PGOFF - KASAN_TAG_WIDTH) +#define LRU_GEN_PGOFF (KASAN_TAG_PGOFF - LRU_GEN_WIDTH) /* * Define the bit shifts to access each section. For non-existent diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index 355ea1ee32bd..2d306cab36bc 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -79,11 +79,199 @@ static __always_inline enum lru_list page_lru(struct page *page) return lru; } +#ifdef CONFIG_LRU_GEN + +#ifdef CONFIG_LRU_GEN_ENABLED +DECLARE_STATIC_KEY_TRUE(lru_gen_static_key); +#define lru_gen_enabled() static_branch_likely(&lru_gen_static_key) +#else +DECLARE_STATIC_KEY_FALSE(lru_gen_static_key); +#define lru_gen_enabled() static_branch_unlikely(&lru_gen_static_key) +#endif + +/* + * Raw generation numbers (seq) from struct lru_gen are in unsigned long and + * therefore (virtually) monotonic; truncated generation numbers (gen) occupy + * at most ilog2(MAX_NR_GENS)+1 bits in page flags and therefore are cyclic. + */ +static inline int lru_gen_from_seq(unsigned long seq) +{ + return seq % MAX_NR_GENS; +} + +/* The youngest and the second youngest generations are considered active. */ +static inline bool lru_gen_is_active(struct lruvec *lruvec, int gen) +{ + unsigned long max_seq = READ_ONCE(lruvec->evictable.max_seq); + + VM_BUG_ON(!max_seq); + VM_BUG_ON(gen >= MAX_NR_GENS); + + return gen == lru_gen_from_seq(max_seq) || gen == lru_gen_from_seq(max_seq - 1); +} + +/* Returns -1 when multigenerational lru is disabled or page is isolated. */ +static inline int page_lru_gen(struct page *page) +{ + return ((READ_ONCE(page->flags) & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1; +} + +/* Update multigenerational lru sizes in addition to active/inactive lru sizes. 
*/ +static inline void lru_gen_update_size(struct page *page, struct lruvec *lruvec, + int old_gen, int new_gen) +{ + int file = page_is_file_lru(page); + int zone = page_zonenum(page); + int delta = thp_nr_pages(page); + enum lru_list lru = LRU_FILE * file; + + lockdep_assert_held(&lruvec->lru_lock); + VM_BUG_ON(old_gen != -1 && old_gen >= MAX_NR_GENS); + VM_BUG_ON(new_gen != -1 && new_gen >= MAX_NR_GENS); + VM_BUG_ON(old_gen == -1 && new_gen == -1); + + if (old_gen >= 0) + WRITE_ONCE(lruvec->evictable.sizes[old_gen][file][zone], + lruvec->evictable.sizes[old_gen][file][zone] - delta); + if (new_gen >= 0) + WRITE_ONCE(lruvec->evictable.sizes[new_gen][file][zone], + lruvec->evictable.sizes[new_gen][file][zone] + delta); + + if (old_gen < 0) { + if (lru_gen_is_active(lruvec, new_gen)) + lru += LRU_ACTIVE; + update_lru_size(lruvec, lru, zone, delta); + return; + } + + if (new_gen < 0) { + if (lru_gen_is_active(lruvec, old_gen)) + lru += LRU_ACTIVE; + update_lru_size(lruvec, lru, zone, -delta); + return; + } + + if (!lru_gen_is_active(lruvec, old_gen) && lru_gen_is_active(lruvec, new_gen)) { + update_lru_size(lruvec, lru, zone, -delta); + update_lru_size(lruvec, lru + LRU_ACTIVE, zone, delta); + } + + /* can't deactivate a page without deleting it first */ + VM_BUG_ON(lru_gen_is_active(lruvec, old_gen) && !lru_gen_is_active(lruvec, new_gen)); +} + +/* Add a page to a multigenerational lru list. Returns true on success. */ +static inline bool page_set_lru_gen(struct page *page, struct lruvec *lruvec, bool front) +{ + int gen; + unsigned long old_flags, new_flags; + int file = page_is_file_lru(page); + int zone = page_zonenum(page); + + if (PageUnevictable(page) || !lruvec->evictable.enabled[file]) + return false; + /* + * If a page is being faulted in, mark it as the youngest generation. + * try_walk_mm_list() may look at the size of the youngest generation + * to determine if a page table walk is needed. + * + * If an unmapped page is being activated, e.g., mark_page_accessed(), + * mark it as the second youngest generation so it won't affect + * try_walk_mm_list(). + * + * If a page is being evicted, i.e., waiting for writeback, mark it + * as the second oldest generation so it won't be scanned again + * immediately. And if there are more than three generations, it won't + * be counted as active either. + * + * If a page is being deactivated, rotated by writeback or allocated + * by readahead, mark it as the oldest generation so it will evicted + * first. + */ + if (PageActive(page) && page_mapped(page)) + gen = lru_gen_from_seq(lruvec->evictable.max_seq); + else if (PageActive(page)) + gen = lru_gen_from_seq(lruvec->evictable.max_seq - 1); + else if (PageReclaim(page)) + gen = lru_gen_from_seq(lruvec->evictable.min_seq[file] + 1); + else + gen = lru_gen_from_seq(lruvec->evictable.min_seq[file]); + + do { + old_flags = READ_ONCE(page->flags); + VM_BUG_ON_PAGE(old_flags & LRU_GEN_MASK, page); + + new_flags = (old_flags & ~(LRU_GEN_MASK | BIT(PG_active) | BIT(PG_workingset))) | + ((gen + 1UL) << LRU_GEN_PGOFF); + /* mark page as workingset if active */ + if (PageActive(page)) + new_flags |= BIT(PG_workingset); + } while (cmpxchg(&page->flags, old_flags, new_flags) != old_flags); + + lru_gen_update_size(page, lruvec, -1, gen); + if (front) + list_add(&page->lru, &lruvec->evictable.lists[gen][file][zone]); + else + list_add_tail(&page->lru, &lruvec->evictable.lists[gen][file][zone]); + + return true; +} + +/* Delete a page from a multigenerational lru list. Returns true on success. 
*/ +static inline bool page_clear_lru_gen(struct page *page, struct lruvec *lruvec) +{ + int gen; + unsigned long old_flags, new_flags; + + do { + old_flags = READ_ONCE(page->flags); + if (!(old_flags & LRU_GEN_MASK)) + return false; + + VM_BUG_ON_PAGE(PageActive(page), page); + VM_BUG_ON_PAGE(PageUnevictable(page), page); + + gen = ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1; + + new_flags = old_flags & ~LRU_GEN_MASK; + /* mark page active accordingly */ + if (lru_gen_is_active(lruvec, gen)) + new_flags |= BIT(PG_active); + } while (cmpxchg(&page->flags, old_flags, new_flags) != old_flags); + + lru_gen_update_size(page, lruvec, gen, -1); + list_del(&page->lru); + + return true; +} + +#else /* CONFIG_LRU_GEN */ + +static inline bool lru_gen_enabled(void) +{ + return false; +} + +static inline bool page_set_lru_gen(struct page *page, struct lruvec *lruvec, bool front) +{ + return false; +} + +static inline bool page_clear_lru_gen(struct page *page, struct lruvec *lruvec) +{ + return false; +} + +#endif /* CONFIG_LRU_GEN */ + static __always_inline void add_page_to_lru_list(struct page *page, struct lruvec *lruvec) { enum lru_list lru = page_lru(page); + if (page_set_lru_gen(page, lruvec, true)) + return; + update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page)); list_add(&page->lru, &lruvec->lists[lru]); } @@ -93,6 +281,9 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page, { enum lru_list lru = page_lru(page); + if (page_set_lru_gen(page, lruvec, false)) + return; + update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page)); list_add_tail(&page->lru, &lruvec->lists[lru]); } @@ -100,6 +291,9 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page, static __always_inline void del_page_from_lru_list(struct page *page, struct lruvec *lruvec) { + if (page_clear_lru_gen(page, lruvec)) + return; + list_del(&page->lru); update_lru_size(lruvec, page_lru(page), page_zonenum(page), -thp_nr_pages(page)); diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index a99a1050565a..173083bb846e 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -291,6 +291,56 @@ enum lruvec_flags { */ }; +struct lruvec; + +#define LRU_GEN_MASK ((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF) + +#ifdef CONFIG_LRU_GEN + +#define MAX_NR_GENS CONFIG_NR_LRU_GENS + +/* + * For a common x86_64 configuration that has 3 zones and 7 generations, + * the size of this struct is 1112; and 4 zones and 15 generations, the + * size is 3048. Though it can be configured to have 6 zones and 63 + * generations, there is unlikely a need for it. 
+ */ +struct lru_gen { + /* aging increments max generation number */ + unsigned long max_seq; + /* eviction increments min generation numbers */ + unsigned long min_seq[ANON_AND_FILE]; + /* birth time of each generation in jiffies */ + unsigned long timestamps[MAX_NR_GENS]; + /* multigenerational lru lists */ + struct list_head lists[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES]; + /* sizes of multigenerational lru lists in pages */ + unsigned long sizes[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES]; + /* used with swappiness to determine which to reclaim */ + unsigned long isolated[ANON_AND_FILE]; +#ifdef CONFIG_MEMCG + /* reclaim priority to compare with other memcgs */ + atomic_t priority; +#endif + /* whether multigenerational lru is enabled */ + bool enabled[ANON_AND_FILE]; +}; + +void lru_gen_init_lruvec(struct lruvec *lruvec); +void lru_gen_set_state(bool enable, bool main, bool swap); + +#else /* CONFIG_LRU_GEN */ + +static inline void lru_gen_init_lruvec(struct lruvec *lruvec) +{ +} + +static inline void lru_gen_set_state(bool enable, bool main, bool swap) +{ +} + +#endif /* CONFIG_LRU_GEN */ + struct lruvec { struct list_head lists[NR_LRU_LISTS]; /* per lruvec lru_lock for memcg */ @@ -308,6 +358,10 @@ struct lruvec { unsigned long refaults[ANON_AND_FILE]; /* Various lruvec state flags (enum lruvec_flags) */ unsigned long flags; +#ifdef CONFIG_LRU_GEN + /* unevictable pages are on LRU_UNEVICTABLE */ + struct lru_gen evictable; +#endif #ifdef CONFIG_MEMCG struct pglist_data *pgdat; #endif diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h index 7d4ec26d8a3e..0c24ace9da3c 100644 --- a/include/linux/page-flags-layout.h +++ b/include/linux/page-flags-layout.h @@ -24,6 +24,20 @@ #error ZONES_SHIFT -- too many zones configured adjust calculation #endif +#ifndef CONFIG_LRU_GEN +#define LRU_GEN_WIDTH 0 +#else +#if CONFIG_NR_LRU_GENS < 8 +#define LRU_GEN_WIDTH 3 +#elif CONFIG_NR_LRU_GENS < 16 +#define LRU_GEN_WIDTH 4 +#elif CONFIG_NR_LRU_GENS < 32 +#define LRU_GEN_WIDTH 5 +#else +#define LRU_GEN_WIDTH 6 +#endif +#endif /* CONFIG_LRU_GEN */ + #ifdef CONFIG_SPARSEMEM #include @@ -56,7 +70,7 @@ #define ZONES_WIDTH ZONES_SHIFT -#if SECTIONS_WIDTH+ZONES_WIDTH+NODES_SHIFT <= BITS_PER_LONG - NR_PAGEFLAGS +#if SECTIONS_WIDTH+ZONES_WIDTH+LRU_GEN_WIDTH+NODES_SHIFT <= BITS_PER_LONG - NR_PAGEFLAGS #define NODES_WIDTH NODES_SHIFT #else #ifdef CONFIG_SPARSEMEM_VMEMMAP @@ -83,14 +97,14 @@ #define KASAN_TAG_WIDTH 0 #endif -#if SECTIONS_WIDTH+ZONES_WIDTH+NODES_SHIFT+LAST_CPUPID_SHIFT+KASAN_TAG_WIDTH \ +#if SECTIONS_WIDTH+ZONES_WIDTH+LRU_GEN_WIDTH+NODES_WIDTH+KASAN_TAG_WIDTH+LAST_CPUPID_SHIFT \ <= BITS_PER_LONG - NR_PAGEFLAGS #define LAST_CPUPID_WIDTH LAST_CPUPID_SHIFT #else #define LAST_CPUPID_WIDTH 0 #endif -#if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH+LAST_CPUPID_WIDTH+KASAN_TAG_WIDTH \ +#if SECTIONS_WIDTH+ZONES_WIDTH+LRU_GEN_WIDTH+NODES_WIDTH+KASAN_TAG_WIDTH+LAST_CPUPID_WIDTH \ > BITS_PER_LONG - NR_PAGEFLAGS #error "Not enough bits in page flags" #endif diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 395c75111d33..be9bf681313c 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2422,7 +2422,8 @@ static void __split_huge_page_tail(struct page *head, int tail, #ifdef CONFIG_64BIT (1L << PG_arch_2) | #endif - (1L << PG_dirty))); + (1L << PG_dirty) | + LRU_GEN_MASK)); /* ->mapping in first tail page is compound_mapcount */ VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING, diff --git a/mm/mm_init.c b/mm/mm_init.c index 8e02e865cc65..0b91a25fbdee 100644 --- 
a/mm/mm_init.c +++ b/mm/mm_init.c @@ -71,27 +71,30 @@ void __init mminit_verify_pageflags_layout(void) width = shift - SECTIONS_WIDTH - NODES_WIDTH - ZONES_WIDTH - LAST_CPUPID_SHIFT - KASAN_TAG_WIDTH; mminit_dprintk(MMINIT_TRACE, "pageflags_layout_widths", - "Section %d Node %d Zone %d Lastcpupid %d Kasantag %d Flags %d\n", + "Section %d Node %d Zone %d Lastcpupid %d Kasantag %d lru_gen %d Flags %d\n", SECTIONS_WIDTH, NODES_WIDTH, ZONES_WIDTH, LAST_CPUPID_WIDTH, KASAN_TAG_WIDTH, + LRU_GEN_WIDTH, NR_PAGEFLAGS); mminit_dprintk(MMINIT_TRACE, "pageflags_layout_shifts", - "Section %d Node %d Zone %d Lastcpupid %d Kasantag %d\n", + "Section %d Node %d Zone %d Lastcpupid %d Kasantag %d lru_gen %d\n", SECTIONS_SHIFT, NODES_SHIFT, ZONES_SHIFT, LAST_CPUPID_SHIFT, - KASAN_TAG_WIDTH); + KASAN_TAG_WIDTH, + LRU_GEN_WIDTH); mminit_dprintk(MMINIT_TRACE, "pageflags_layout_pgshifts", - "Section %lu Node %lu Zone %lu Lastcpupid %lu Kasantag %lu\n", + "Section %lu Node %lu Zone %lu Lastcpupid %lu Kasantag %lu lru_gen %lu\n", (unsigned long)SECTIONS_PGSHIFT, (unsigned long)NODES_PGSHIFT, (unsigned long)ZONES_PGSHIFT, (unsigned long)LAST_CPUPID_PGSHIFT, - (unsigned long)KASAN_TAG_PGSHIFT); + (unsigned long)KASAN_TAG_PGSHIFT, + (unsigned long)LRU_GEN_PGOFF); mminit_dprintk(MMINIT_TRACE, "pageflags_layout_nodezoneid", "Node/Zone ID: %lu -> %lu\n", (unsigned long)(ZONEID_PGOFF + ZONEID_SHIFT), diff --git a/mm/mmzone.c b/mm/mmzone.c index eb89d6e018e2..2ec0d7793424 100644 --- a/mm/mmzone.c +++ b/mm/mmzone.c @@ -81,6 +81,8 @@ void lruvec_init(struct lruvec *lruvec) for_each_lru(lru) INIT_LIST_HEAD(&lruvec->lists[lru]); + + lru_gen_init_lruvec(lruvec); } #if defined(CONFIG_NUMA_BALANCING) && !defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) diff --git a/mm/swap.c b/mm/swap.c index f20ed56ebbbf..bd10efe00684 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -300,6 +300,10 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages) void lru_note_cost_page(struct page *page) { + /* multigenerational lru doesn't use any heuristics */ + if (lru_gen_enabled()) + return; + lru_note_cost(mem_cgroup_page_lruvec(page, page_pgdat(page)), page_is_file_lru(page), thp_nr_pages(page)); } diff --git a/mm/swapfile.c b/mm/swapfile.c index 084a5b9a18e5..fe03cfeaa08f 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -2702,6 +2702,8 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile) err = 0; atomic_inc(&proc_poll_event); wake_up_interruptible(&proc_poll_wait); + /* stop anon multigenerational lru if it's enabled */ + lru_gen_set_state(false, false, true); out_dput: filp_close(victim, NULL); @@ -3348,6 +3350,8 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags) mutex_unlock(&swapon_mutex); atomic_inc(&proc_poll_event); wake_up_interruptible(&proc_poll_wait); + /* start anon multigenerational lru if it's enabled */ + lru_gen_set_state(true, false, true); error = 0; goto out; diff --git a/mm/vmscan.c b/mm/vmscan.c index f7657ab0d4b7..fd49a9a5d7f5 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -49,6 +49,8 @@ #include #include #include +#include +#include #include #include @@ -1110,6 +1112,10 @@ static unsigned int shrink_page_list(struct list_head *page_list, if (!sc->may_unmap && page_mapped(page)) goto keep_locked; + /* in case this page was found accessed after it was isolated */ + if (lru_gen_enabled() && !ignore_references && PageReferenced(page)) + goto activate_locked; + may_enter_fs = (sc->gfp_mask & __GFP_FS) || (PageSwapCache(page) && (sc->gfp_mask & __GFP_IO)); @@ -2229,6 +2235,10 @@ static 
void prepare_scan_count(pg_data_t *pgdat, struct scan_control *sc) unsigned long file; struct lruvec *target_lruvec; + /* multigenerational lru doesn't use any heuristics */ + if (lru_gen_enabled()) + return; + target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat); /* @@ -2518,6 +2528,19 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, } } +#ifdef CONFIG_LRU_GEN +static void age_lru_gens(struct pglist_data *pgdat, struct scan_control *sc); +static void shrink_lru_gens(struct lruvec *lruvec, struct scan_control *sc); +#else +static void age_lru_gens(struct pglist_data *pgdat, struct scan_control *sc) +{ +} + +static void shrink_lru_gens(struct lruvec *lruvec, struct scan_control *sc) +{ +} +#endif + static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) { unsigned long nr[NR_LRU_LISTS]; @@ -2529,6 +2552,11 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) struct blk_plug plug; bool scan_adjusted; + if (lru_gen_enabled()) { + shrink_lru_gens(lruvec, sc); + return; + } + get_scan_count(lruvec, sc, nr); /* Record the original scan target for proportional adjustments later */ @@ -2995,6 +3023,10 @@ static void snapshot_refaults(struct mem_cgroup *target_memcg, pg_data_t *pgdat) struct lruvec *target_lruvec; unsigned long refaults; + /* multigenerational lru doesn't use any heuristics */ + if (lru_gen_enabled()) + return; + target_lruvec = mem_cgroup_lruvec(target_memcg, pgdat); refaults = lruvec_page_state(target_lruvec, WORKINGSET_ACTIVATE_ANON); target_lruvec->refaults[0] = refaults; @@ -3369,6 +3401,11 @@ static void age_active_anon(struct pglist_data *pgdat, struct mem_cgroup *memcg; struct lruvec *lruvec; + if (lru_gen_enabled()) { + age_lru_gens(pgdat, sc); + return; + } + if (!total_swap_pages) return; @@ -4553,12 +4590,1227 @@ static bool get_next_mm(struct lruvec *lruvec, unsigned long next_seq, return last; } +/****************************************************************************** + * aging (page table walk) + ******************************************************************************/ + +#define DEFINE_MAX_SEQ(lruvec) \ + unsigned long max_seq = READ_ONCE((lruvec)->evictable.max_seq) + +#define DEFINE_MIN_SEQ(lruvec) \ + unsigned long min_seq[ANON_AND_FILE] = { \ + READ_ONCE((lruvec)->evictable.min_seq[0]), \ + READ_ONCE((lruvec)->evictable.min_seq[1]), \ + } + +#define for_each_gen_type_zone(gen, file, zone) \ + for ((gen) = 0; (gen) < MAX_NR_GENS; (gen)++) \ + for ((file) = 0; (file) < ANON_AND_FILE; (file)++) \ + for ((zone) = 0; (zone) < MAX_NR_ZONES; (zone)++) + +#define for_each_type_zone(file, zone) \ + for ((file) = 0; (file) < ANON_AND_FILE; (file)++) \ + for ((zone) = 0; (zone) < MAX_NR_ZONES; (zone)++) + +#define MAX_BATCH_SIZE 8192 + +static DEFINE_PER_CPU(int [MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES], lru_batch_size); + +static void update_batch_size(struct page *page, int old_gen, int new_gen) +{ + int file = page_is_file_lru(page); + int zone = page_zonenum(page); + int delta = thp_nr_pages(page); + + VM_BUG_ON(preemptible()); + VM_BUG_ON(in_interrupt()); + VM_BUG_ON(old_gen >= MAX_NR_GENS); + VM_BUG_ON(new_gen >= MAX_NR_GENS); + + __this_cpu_sub(lru_batch_size[old_gen][file][zone], delta); + __this_cpu_add(lru_batch_size[new_gen][file][zone], delta); +} + +static void reset_batch_size(struct lruvec *lruvec) +{ + int gen, file, zone; + + VM_BUG_ON(preemptible()); + VM_BUG_ON(in_interrupt()); + + spin_lock_irq(&lruvec->lru_lock); + + for_each_gen_type_zone(gen, file, zone) { 
+ enum lru_list lru = LRU_FILE * file; + int total = __this_cpu_read(lru_batch_size[gen][file][zone]); + + if (!total) + continue; + + __this_cpu_write(lru_batch_size[gen][file][zone], 0); + + WRITE_ONCE(lruvec->evictable.sizes[gen][file][zone], + lruvec->evictable.sizes[gen][file][zone] + total); + + if (lru_gen_is_active(lruvec, gen)) + lru += LRU_ACTIVE; + update_lru_size(lruvec, lru, zone, total); + } + + spin_unlock_irq(&lruvec->lru_lock); +} + +static int page_update_lru_gen(struct page *page, int new_gen) +{ + int old_gen; + unsigned long old_flags, new_flags; + + VM_BUG_ON(new_gen >= MAX_NR_GENS); + + do { + old_flags = READ_ONCE(page->flags); + + old_gen = ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1; + if (old_gen < 0) { + /* make sure shrink_page_list() rejects this page */ + if (!PageReferenced(page)) + SetPageReferenced(page); + break; + } + + new_flags = (old_flags & ~LRU_GEN_MASK) | ((new_gen + 1UL) << LRU_GEN_PGOFF); + if (old_flags == new_flags) + break; + } while (cmpxchg(&page->flags, old_flags, new_flags) != old_flags); + + /* sort_page_by_gen() will sort this page during eviction */ + + return old_gen; +} + +struct mm_walk_args { + struct mem_cgroup *memcg; + unsigned long max_seq; + unsigned long next_addr; + unsigned long start_pfn; + unsigned long end_pfn; + unsigned long addr_bitmap; + int node_id; + int batch_size; + bool should_walk[ANON_AND_FILE]; +}; + +static inline unsigned long get_addr_mask(unsigned long addr) +{ + return BIT((addr & ~PUD_MASK) >> ilog2(PUD_SIZE / BITS_PER_LONG)); +} + +static int walk_pte_range(pmd_t *pmdp, unsigned long start, unsigned long end, + struct mm_walk *walk) +{ + pmd_t pmd; + pte_t *pte; + spinlock_t *ptl; + struct mm_walk_args *args = walk->private; + int old_gen, new_gen = lru_gen_from_seq(args->max_seq); + + pmd = pmd_read_atomic(pmdp); + barrier(); + if (!pmd_present(pmd) || pmd_trans_huge(pmd)) + return 0; + + VM_BUG_ON(pmd_huge(pmd) || pmd_devmap(pmd) || is_hugepd(__hugepd(pmd_val(pmd)))); + + if (IS_ENABLED(CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG) && !pmd_young(pmd)) + return 0; + + pte = pte_offset_map_lock(walk->mm, &pmd, start, &ptl); + arch_enter_lazy_mmu_mode(); + + for (; start != end; pte++, start += PAGE_SIZE) { + struct page *page; + unsigned long pfn = pte_pfn(*pte); + + if (!pte_present(*pte) || !pte_young(*pte) || is_zero_pfn(pfn)) + continue; + + /* + * If this pte maps a page from a different node, set the + * bitmap to prevent the accessed bit on its parent pmd from + * being cleared. 
+ */ + if (pfn < args->start_pfn || pfn >= args->end_pfn) { + args->addr_bitmap |= get_addr_mask(start); + continue; + } + + page = compound_head(pte_page(*pte)); + if (page_to_nid(page) != args->node_id) { + args->addr_bitmap |= get_addr_mask(start); + continue; + } + if (page_memcg_rcu(page) != args->memcg) + continue; + + if (ptep_test_and_clear_young(walk->vma, start, pte)) { + old_gen = page_update_lru_gen(page, new_gen); + if (old_gen >= 0 && old_gen != new_gen) { + update_batch_size(page, old_gen, new_gen); + args->batch_size++; + } + } + + if (pte_dirty(*pte) && !PageDirty(page) && + !(PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page))) + set_page_dirty(page); + } + + arch_leave_lazy_mmu_mode(); + pte_unmap_unlock(pte, ptl); + + return 0; +} + +static int walk_pmd_range(pud_t *pudp, unsigned long start, unsigned long end, + struct mm_walk *walk) +{ + pud_t pud; + pmd_t *pmd; + spinlock_t *ptl; + struct mm_walk_args *args = walk->private; + int old_gen, new_gen = lru_gen_from_seq(args->max_seq); + + pud = READ_ONCE(*pudp); + if (!pud_present(pud) || WARN_ON_ONCE(pud_trans_huge(pud))) + return 0; + + VM_BUG_ON(pud_huge(pud) || pud_devmap(pud) || is_hugepd(__hugepd(pud_val(pud)))); + + if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && + !IS_ENABLED(CONFIG_HAVE_ARCH_PARENT_PMD_YOUNG)) + goto done; + + pmd = pmd_offset(&pud, start); + ptl = pmd_lock(walk->mm, pmd); + arch_enter_lazy_mmu_mode(); + + for (; start != end; pmd++, start = pmd_addr_end(start, end)) { + struct page *page; + unsigned long pfn = pmd_pfn(*pmd); + + if (!pmd_present(*pmd) || !pmd_young(*pmd) || is_huge_zero_pmd(*pmd)) + continue; + + if (!pmd_trans_huge(*pmd)) { + if (!(args->addr_bitmap & get_addr_mask(start)) && + (!(pmd_addr_end(start, end) & ~PMD_MASK) || + !walk->vma->vm_next || + (walk->vma->vm_next->vm_start & PMD_MASK) > end)) + pmdp_test_and_clear_young(walk->vma, start, pmd); + continue; + } + + if (pfn < args->start_pfn || pfn >= args->end_pfn) + continue; + + page = pmd_page(*pmd); + if (page_to_nid(page) != args->node_id) + continue; + if (page_memcg_rcu(page) != args->memcg) + continue; + + if (pmdp_test_and_clear_young(walk->vma, start, pmd)) { + old_gen = page_update_lru_gen(page, new_gen); + if (old_gen >= 0 && old_gen != new_gen) { + update_batch_size(page, old_gen, new_gen); + args->batch_size++; + } + } + + if (pmd_dirty(*pmd) && !PageDirty(page) && + !(PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page))) + set_page_dirty(page); + } + + arch_leave_lazy_mmu_mode(); + spin_unlock(ptl); +done: + args->addr_bitmap = 0; + + if (args->batch_size < MAX_BATCH_SIZE) + return 0; + + args->next_addr = end; + + return -EAGAIN; +} + +static int should_skip_vma(unsigned long start, unsigned long end, struct mm_walk *walk) +{ + struct vm_area_struct *vma = walk->vma; + struct mm_walk_args *args = walk->private; + + if (vma->vm_flags & (VM_LOCKED | VM_SPECIAL | VM_HUGETLB)) + return true; + + if (!(vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC))) + return true; + + if (vma_is_anonymous(vma)) + return !args->should_walk[0]; + + if (vma_is_shmem(vma)) + return !args->should_walk[0] || + mapping_unevictable(vma->vm_file->f_mapping); + + return !args->should_walk[1] || vma_is_dax(vma) || + vma == get_gate_vma(vma->vm_mm) || + mapping_unevictable(vma->vm_file->f_mapping); +} + +static void walk_mm(struct lruvec *lruvec, struct mm_struct *mm, int swappiness) +{ + int err; + int file; + struct mem_cgroup *memcg = lruvec_memcg(lruvec); + struct pglist_data *pgdat = lruvec_pgdat(lruvec); + struct 
mm_walk_args args = {}; + struct mm_walk_ops ops = { + .test_walk = should_skip_vma, + .pmd_entry = walk_pte_range, + .pud_entry_post = walk_pmd_range, + }; + + args.memcg = memcg; + args.max_seq = READ_ONCE(lruvec->evictable.max_seq); + args.next_addr = FIRST_USER_ADDRESS; + args.start_pfn = pgdat->node_start_pfn; + args.end_pfn = pgdat_end_pfn(pgdat); + args.node_id = pgdat->node_id; + + for (file = !swappiness; file < ANON_AND_FILE; file++) + args.should_walk[file] = lru_gen_mm_is_active(mm) || + node_isset(pgdat->node_id, mm->lru_gen.nodes[file]); + + do { + unsigned long start = args.next_addr; + unsigned long end = mm->highest_vm_end; + + err = -EBUSY; + + preempt_disable(); + rcu_read_lock(); + +#ifdef CONFIG_MEMCG + if (memcg && atomic_read(&memcg->moving_account)) + goto contended; +#endif + if (!mmap_read_trylock(mm)) + goto contended; + + args.batch_size = 0; + + err = walk_page_range(mm, start, end, &ops, &args); + + mmap_read_unlock(mm); + + if (args.batch_size) + reset_batch_size(lruvec); +contended: + rcu_read_unlock(); + preempt_enable(); + + cond_resched(); + } while (err == -EAGAIN && !mm_is_oom_victim(mm) && !mm_has_migrated(mm, memcg)); + + if (err) + return; + + for (file = !swappiness; file < ANON_AND_FILE; file++) { + if (args.should_walk[file]) + node_clear(pgdat->node_id, mm->lru_gen.nodes[file]); + } +} + +static void page_inc_lru_gen(struct page *page, struct lruvec *lruvec, bool front) +{ + int old_gen, new_gen; + unsigned long old_flags, new_flags; + int file = page_is_file_lru(page); + int zone = page_zonenum(page); + + old_gen = lru_gen_from_seq(lruvec->evictable.min_seq[file]); + + do { + old_flags = READ_ONCE(page->flags); + new_gen = ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1; + VM_BUG_ON_PAGE(new_gen < 0, page); + if (new_gen >= 0 && new_gen != old_gen) + goto sort; + + new_gen = (old_gen + 1) % MAX_NR_GENS; + new_flags = (old_flags & ~LRU_GEN_MASK) | ((new_gen + 1UL) << LRU_GEN_PGOFF); + /* mark page for reclaim if pending writeback */ + if (front) + new_flags |= BIT(PG_reclaim); + } while (cmpxchg(&page->flags, old_flags, new_flags) != old_flags); + + lru_gen_update_size(page, lruvec, old_gen, new_gen); +sort: + if (front) + list_move(&page->lru, &lruvec->evictable.lists[new_gen][file][zone]); + else + list_move_tail(&page->lru, &lruvec->evictable.lists[new_gen][file][zone]); +} + +static int get_nr_gens(struct lruvec *lruvec, int file) +{ + return lruvec->evictable.max_seq - lruvec->evictable.min_seq[file] + 1; +} + +static bool __maybe_unused seq_is_valid(struct lruvec *lruvec) +{ + lockdep_assert_held(&lruvec->lru_lock); + + return get_nr_gens(lruvec, 0) >= MIN_NR_GENS && + get_nr_gens(lruvec, 0) <= MAX_NR_GENS && + get_nr_gens(lruvec, 1) >= MIN_NR_GENS && + get_nr_gens(lruvec, 1) <= MAX_NR_GENS; +} + +static bool try_inc_min_seq(struct lruvec *lruvec, int file) +{ + int gen, zone; + bool success = false; + + VM_BUG_ON(!seq_is_valid(lruvec)); + + while (get_nr_gens(lruvec, file) > MIN_NR_GENS) { + gen = lru_gen_from_seq(lruvec->evictable.min_seq[file]); + + for (zone = 0; zone < MAX_NR_ZONES; zone++) { + if (!list_empty(&lruvec->evictable.lists[gen][file][zone])) + return success; + } + + lruvec->evictable.isolated[file] = 0; + WRITE_ONCE(lruvec->evictable.min_seq[file], + lruvec->evictable.min_seq[file] + 1); + + success = true; + } + + return success; +} + +static bool inc_min_seq(struct lruvec *lruvec, int file) +{ + int gen, zone; + int batch_size = 0; + + VM_BUG_ON(!seq_is_valid(lruvec)); + + if (get_nr_gens(lruvec, file) != MAX_NR_GENS) 
+ return true; + + gen = lru_gen_from_seq(lruvec->evictable.min_seq[file]); + + for (zone = 0; zone < MAX_NR_ZONES; zone++) { + struct list_head *head = &lruvec->evictable.lists[gen][file][zone]; + + while (!list_empty(head)) { + struct page *page = lru_to_page(head); + + VM_BUG_ON_PAGE(PageTail(page), page); + VM_BUG_ON_PAGE(PageUnevictable(page), page); + VM_BUG_ON_PAGE(PageActive(page), page); + VM_BUG_ON_PAGE(page_is_file_lru(page) != file, page); + VM_BUG_ON_PAGE(page_zonenum(page) != zone, page); + + prefetchw_prev_lru_page(page, head, flags); + + page_inc_lru_gen(page, lruvec, false); + + if (++batch_size == MAX_BATCH_SIZE) + return false; + } + + VM_BUG_ON(lruvec->evictable.sizes[gen][file][zone]); + } + + lruvec->evictable.isolated[file] = 0; + WRITE_ONCE(lruvec->evictable.min_seq[file], + lruvec->evictable.min_seq[file] + 1); + + return true; +} + +static void inc_max_seq(struct lruvec *lruvec) +{ + int gen, file, zone; + + spin_lock_irq(&lruvec->lru_lock); + + VM_BUG_ON(!seq_is_valid(lruvec)); + + for (file = 0; file < ANON_AND_FILE; file++) { + if (try_inc_min_seq(lruvec, file)) + continue; + + while (!inc_min_seq(lruvec, file)) { + spin_unlock_irq(&lruvec->lru_lock); + cond_resched(); + spin_lock_irq(&lruvec->lru_lock); + } + } + + gen = lru_gen_from_seq(lruvec->evictable.max_seq - 1); + for_each_type_zone(file, zone) { + enum lru_list lru = LRU_FILE * file; + long total = lruvec->evictable.sizes[gen][file][zone]; + + WARN_ON_ONCE(total != (int)total); + + update_lru_size(lruvec, lru, zone, total); + update_lru_size(lruvec, lru + LRU_ACTIVE, zone, -total); + } + + gen = lru_gen_from_seq(lruvec->evictable.max_seq + 1); + for_each_type_zone(file, zone) { + VM_BUG_ON(lruvec->evictable.sizes[gen][file][zone]); + VM_BUG_ON(!list_empty(&lruvec->evictable.lists[gen][file][zone])); + } + + WRITE_ONCE(lruvec->evictable.timestamps[gen], jiffies); + /* make sure the birth time is valid when read locklessly */ + smp_store_release(&lruvec->evictable.max_seq, lruvec->evictable.max_seq + 1); + + spin_unlock_irq(&lruvec->lru_lock); +} + +/* Main function used by foreground, background and user-triggered aging. */ +static bool walk_mm_list(struct lruvec *lruvec, unsigned long next_seq, + struct scan_control *sc, int swappiness) +{ + bool last; + struct mm_struct *mm = NULL; + int nid = lruvec_pgdat(lruvec)->node_id; + struct mem_cgroup *memcg = lruvec_memcg(lruvec); + struct lru_gen_mm_list *mm_list = get_mm_list(memcg); + + VM_BUG_ON(next_seq > READ_ONCE(lruvec->evictable.max_seq)); + + /* + * For each walk of the mm list of a memcg, we decrement the priority + * of its lruvec. For each walk of memcgs in kswapd, we increment the + * priorities of all lruvecs. + * + * So if this lruvec has a higher priority (smaller value), it means + * other concurrent reclaimers (global or memcg reclaim) have walked + * its mm list. Skip it for this priority to balance the pressure on + * all memcgs. 
+ */ +#ifdef CONFIG_MEMCG + if (!mem_cgroup_disabled() && !cgroup_reclaim(sc) && + sc->priority > atomic_read(&lruvec->evictable.priority)) + return false; +#endif + + do { + last = get_next_mm(lruvec, next_seq, swappiness, &mm); + if (mm) + walk_mm(lruvec, mm, swappiness); + + cond_resched(); + } while (mm); + + if (!last) { + /* foreground aging prefers not to wait unless "necessary" */ + if (!current_is_kswapd() && sc->priority < DEF_PRIORITY - 2) + wait_event_killable(mm_list->nodes[nid].wait, + next_seq < READ_ONCE(lruvec->evictable.max_seq)); + + return next_seq < READ_ONCE(lruvec->evictable.max_seq); + } + + VM_BUG_ON(next_seq != READ_ONCE(lruvec->evictable.max_seq)); + + inc_max_seq(lruvec); + +#ifdef CONFIG_MEMCG + if (!mem_cgroup_disabled()) + atomic_add_unless(&lruvec->evictable.priority, -1, 0); +#endif + + /* order against inc_max_seq() */ + smp_mb(); + /* either we see any waiters or they will see updated max_seq */ + if (waitqueue_active(&mm_list->nodes[nid].wait)) + wake_up_all(&mm_list->nodes[nid].wait); + + wakeup_flusher_threads(WB_REASON_VMSCAN); + + return true; +} + +/****************************************************************************** + * eviction (lru list scan) + ******************************************************************************/ + +static int max_nr_gens(unsigned long max_seq, unsigned long *min_seq, int swappiness) +{ + return max_seq - min(min_seq[!swappiness], min_seq[1]) + 1; +} + +static bool sort_page_by_gen(struct page *page, struct lruvec *lruvec) +{ + bool success; + int gen = page_lru_gen(page); + int file = page_is_file_lru(page); + int zone = page_zonenum(page); + + VM_BUG_ON_PAGE(gen == -1, page); + + /* a lazy free page that has been written into */ + if (file && PageDirty(page) && PageAnon(page)) { + success = page_clear_lru_gen(page, lruvec); + VM_BUG_ON_PAGE(!success, page); + SetPageSwapBacked(page); + add_page_to_lru_list_tail(page, lruvec); + return true; + } + + /* page_update_lru_gen() has updated the page */ + if (gen != lru_gen_from_seq(lruvec->evictable.min_seq[file])) { + list_move(&page->lru, &lruvec->evictable.lists[gen][file][zone]); + return true; + } + + /* + * A page can't be immediately evicted, and page_inc_lru_gen() will + * mark it for reclaim and hopefully writeback will write it soon. + * + * During page table walk, we call set_page_dirty() on pages that have + * dirty PTEs, which helps account dirty pages so writeback should do + * its job. 
+ */ + if (PageLocked(page) || PageWriteback(page) || (file && PageDirty(page))) { + page_inc_lru_gen(page, lruvec, true); + return true; + } + + return false; +} + +static bool should_skip_page(struct page *page, struct scan_control *sc) +{ + if (!sc->may_unmap && page_mapped(page)) + return true; + + if (!(sc->may_writepage && (sc->gfp_mask & __GFP_IO)) && + (PageDirty(page) || (PageAnon(page) && !PageSwapCache(page)))) + return true; + + if (!get_page_unless_zero(page)) + return true; + + if (!TestClearPageLRU(page)) { + put_page(page); + return true; + } + + return false; +} + +static void isolate_page_by_gen(struct page *page, struct lruvec *lruvec) +{ + bool success; + + success = page_clear_lru_gen(page, lruvec); + VM_BUG_ON_PAGE(!success, page); + + if (PageActive(page)) { + ClearPageActive(page); + /* make sure shrink_page_list() rejects this page */ + if (!PageReferenced(page)) + SetPageReferenced(page); + return; + } + + /* make sure shrink_page_list() doesn't write back this page */ + if (PageReclaim(page)) + ClearPageReclaim(page); + /* make sure shrink_page_list() doesn't reject this page */ + if (PageReferenced(page)) + ClearPageReferenced(page); +} + +static int scan_lru_gen_pages(struct lruvec *lruvec, struct scan_control *sc, + long *nr_to_scan, int file, struct list_head *list) +{ + bool success; + int gen, zone; + enum vm_event_item item; + int sorted = 0; + int scanned = 0; + int isolated = 0; + int batch_size = 0; + + VM_BUG_ON(!list_empty(list)); + + if (get_nr_gens(lruvec, file) == MIN_NR_GENS) + return -ENOENT; + + gen = lru_gen_from_seq(lruvec->evictable.min_seq[file]); + + for (zone = sc->reclaim_idx; zone >= 0; zone--) { + LIST_HEAD(moved); + int skipped = 0; + struct list_head *head = &lruvec->evictable.lists[gen][file][zone]; + + while (!list_empty(head)) { + struct page *page = lru_to_page(head); + int delta = thp_nr_pages(page); + + VM_BUG_ON_PAGE(PageTail(page), page); + VM_BUG_ON_PAGE(PageUnevictable(page), page); + VM_BUG_ON_PAGE(PageActive(page), page); + VM_BUG_ON_PAGE(page_is_file_lru(page) != file, page); + VM_BUG_ON_PAGE(page_zonenum(page) != zone, page); + + prefetchw_prev_lru_page(page, head, flags); + + scanned += delta; + + if (sort_page_by_gen(page, lruvec)) + sorted += delta; + else if (should_skip_page(page, sc)) { + list_move(&page->lru, &moved); + skipped += delta; + } else { + isolate_page_by_gen(page, lruvec); + list_add(&page->lru, list); + isolated += delta; + } + + if (scanned >= *nr_to_scan || isolated >= SWAP_CLUSTER_MAX || + ++batch_size == MAX_BATCH_SIZE) + break; + } + + list_splice(&moved, head); + __count_zid_vm_events(PGSCAN_SKIP, zone, skipped); + + if (scanned >= *nr_to_scan || isolated >= SWAP_CLUSTER_MAX || + batch_size == MAX_BATCH_SIZE) + break; + } + + success = try_inc_min_seq(lruvec, file); + + item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT; + if (!cgroup_reclaim(sc)) + __count_vm_events(item, scanned); + __count_memcg_events(lruvec_memcg(lruvec), item, scanned); + __count_vm_events(PGSCAN_ANON + file, scanned); + + *nr_to_scan -= scanned; + + if (*nr_to_scan <= 0 || success || isolated) + return isolated; + /* + * We may have trouble finding eligible pages due to restrictions from + * reclaim_idx, may_unmap and may_writepage. The following check makes + * sure we won't be stuck if we aren't making enough progress. + */ + return batch_size == MAX_BATCH_SIZE && sorted >= SWAP_CLUSTER_MAX ? 
0 : -ENOENT; +} + +static int isolate_lru_gen_pages(struct lruvec *lruvec, struct scan_control *sc, + int swappiness, long *nr_to_scan, int *file, + struct list_head *list) +{ + int i; + int isolated; + DEFINE_MAX_SEQ(lruvec); + DEFINE_MIN_SEQ(lruvec); + + VM_BUG_ON(!seq_is_valid(lruvec)); + + if (max_nr_gens(max_seq, min_seq, swappiness) == MIN_NR_GENS) + return 0; + + /* simply choose a type based on generations and swappiness */ + *file = !swappiness || min_seq[0] > min_seq[1] || + (min_seq[0] == min_seq[1] && + max(lruvec->evictable.isolated[0], 1UL) * (200 - swappiness) > + max(lruvec->evictable.isolated[1], 1UL) * (swappiness - 1)); + + for (i = !swappiness; i < ANON_AND_FILE; i++) { + isolated = scan_lru_gen_pages(lruvec, sc, nr_to_scan, *file, list); + if (isolated >= 0) + break; + + *file = !*file; + } + + if (isolated < 0) + isolated = *nr_to_scan = 0; + + lruvec->evictable.isolated[*file] += isolated; + + return isolated; +} + +/* Main function used by foreground, background and user-triggered eviction. */ +static bool evict_lru_gen_pages(struct lruvec *lruvec, struct scan_control *sc, + int swappiness, long *nr_to_scan) +{ + int file; + int isolated; + int reclaimed; + LIST_HEAD(list); + struct page *page; + enum vm_event_item item; + struct reclaim_stat stat; + struct pglist_data *pgdat = lruvec_pgdat(lruvec); + + spin_lock_irq(&lruvec->lru_lock); + + isolated = isolate_lru_gen_pages(lruvec, sc, swappiness, nr_to_scan, &file, &list); + VM_BUG_ON(list_empty(&list) == !!isolated); + + if (isolated) + __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, isolated); + + spin_unlock_irq(&lruvec->lru_lock); + + if (!isolated) + goto done; + + reclaimed = shrink_page_list(&list, pgdat, sc, &stat, false); + /* + * We have to prevent any pages from being added back to the same list + * it was isolated from. Otherwise we may risk looping on them forever. + */ + list_for_each_entry(page, &list, lru) { + if (!PageReclaim(page) && !PageMlocked(page) && !PageActive(page)) + SetPageActive(page); + } + + spin_lock_irq(&lruvec->lru_lock); + + move_pages_to_lru(lruvec, &list); + + __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -isolated); + + item = current_is_kswapd() ? 
PGSTEAL_KSWAPD : PGSTEAL_DIRECT; + if (!cgroup_reclaim(sc)) + __count_vm_events(item, reclaimed); + __count_memcg_events(lruvec_memcg(lruvec), item, reclaimed); + __count_vm_events(PGSTEAL_ANON + file, reclaimed); + + spin_unlock_irq(&lruvec->lru_lock); + + mem_cgroup_uncharge_list(&list); + free_unref_page_list(&list); + + sc->nr_reclaimed += reclaimed; +done: + return *nr_to_scan > 0 && sc->nr_reclaimed < sc->nr_to_reclaim; +} + +/****************************************************************************** + * reclaim (aging + eviction) + ******************************************************************************/ + +static unsigned long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, + int swappiness) +{ + int gen, file, zone; + long nr_to_scan = 0; + DEFINE_MAX_SEQ(lruvec); + DEFINE_MIN_SEQ(lruvec); + + lru_add_drain(); + + for (file = !swappiness; file < ANON_AND_FILE; file++) { + unsigned long seq; + + for (seq = min_seq[file]; seq <= max_seq; seq++) { + gen = lru_gen_from_seq(seq); + + for (zone = 0; zone <= sc->reclaim_idx; zone++) + nr_to_scan += READ_ONCE( + lruvec->evictable.sizes[gen][file][zone]); + } + } + + nr_to_scan = max(nr_to_scan, 0L); + nr_to_scan = round_up(nr_to_scan >> sc->priority, SWAP_CLUSTER_MAX); + + if (max_nr_gens(max_seq, min_seq, swappiness) > MIN_NR_GENS) + return nr_to_scan; + + /* kswapd does background aging, i.e., age_lru_gens() */ + if (current_is_kswapd()) + return 0; + + return walk_mm_list(lruvec, max_seq, sc, swappiness) ? nr_to_scan : 0; +} + +static int get_swappiness(struct lruvec *lruvec) +{ + struct mem_cgroup *memcg = lruvec_memcg(lruvec); + int swappiness = mem_cgroup_get_nr_swap_pages(memcg) >= (long)SWAP_CLUSTER_MAX ? + mem_cgroup_swappiness(memcg) : 0; + + VM_BUG_ON(swappiness > 200U); + + return swappiness; +} + +static void shrink_lru_gens(struct lruvec *lruvec, struct scan_control *sc) +{ + struct blk_plug plug; + unsigned long scanned = 0; + struct mem_cgroup *memcg = lruvec_memcg(lruvec); + + blk_start_plug(&plug); + + while (true) { + long nr_to_scan; + int swappiness = sc->may_swap ? 
get_swappiness(lruvec) : 0; + + nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness) - scanned; + if (nr_to_scan < (long)SWAP_CLUSTER_MAX) + break; + + scanned += nr_to_scan; + + if (!evict_lru_gen_pages(lruvec, sc, swappiness, &nr_to_scan)) + break; + + scanned -= nr_to_scan; + + if (mem_cgroup_below_min(memcg) || + (mem_cgroup_below_low(memcg) && !sc->memcg_low_reclaim)) + break; + + cond_resched(); + } + + blk_finish_plug(&plug); +} + +/****************************************************************************** + * background aging + ******************************************************************************/ + +static int lru_gen_spread = MIN_NR_GENS; + +static int min_nr_gens(unsigned long max_seq, unsigned long *min_seq, int swappiness) +{ + return max_seq - max(min_seq[!swappiness], min_seq[1]) + 1; +} + +static void try_walk_mm_list(struct lruvec *lruvec, struct scan_control *sc) +{ + int gen, file, zone; + long old_and_young[2] = {}; + int spread = READ_ONCE(lru_gen_spread); + int swappiness = get_swappiness(lruvec); + DEFINE_MAX_SEQ(lruvec); + DEFINE_MIN_SEQ(lruvec); + + lru_add_drain(); + + for (file = !swappiness; file < ANON_AND_FILE; file++) { + unsigned long seq; + + for (seq = min_seq[file]; seq <= max_seq; seq++) { + gen = lru_gen_from_seq(seq); + + for (zone = 0; zone < MAX_NR_ZONES; zone++) + old_and_young[seq == max_seq] += READ_ONCE( + lruvec->evictable.sizes[gen][file][zone]); + } + } + + old_and_young[0] = max(old_and_young[0], 0L); + old_and_young[1] = max(old_and_young[1], 0L); + + if (old_and_young[0] + old_and_young[1] < SWAP_CLUSTER_MAX) + return; + + /* try to spread pages out across spread+1 generations */ + if (old_and_young[0] >= old_and_young[1] * spread && + min_nr_gens(max_seq, min_seq, swappiness) > max(spread, MIN_NR_GENS)) + return; + + walk_mm_list(lruvec, max_seq, sc, swappiness); +} + +static void age_lru_gens(struct pglist_data *pgdat, struct scan_control *sc) +{ + struct mem_cgroup *memcg; + + VM_BUG_ON(!current_is_kswapd()); + + memcg = mem_cgroup_iter(NULL, NULL, NULL); + do { + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat); + + if (!mem_cgroup_below_min(memcg) && + (!mem_cgroup_below_low(memcg) || sc->memcg_low_reclaim)) + try_walk_mm_list(lruvec, sc); + +#ifdef CONFIG_MEMCG + if (!mem_cgroup_disabled()) + atomic_add_unless(&lruvec->evictable.priority, 1, DEF_PRIORITY); +#endif + + cond_resched(); + } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL))); +} + +/****************************************************************************** + * state change + ******************************************************************************/ + +#ifdef CONFIG_LRU_GEN_ENABLED +DEFINE_STATIC_KEY_TRUE(lru_gen_static_key); +#else +DEFINE_STATIC_KEY_FALSE(lru_gen_static_key); +#endif + +static DEFINE_MUTEX(lru_gen_state_mutex); +static int lru_gen_nr_swapfiles __read_mostly; + +static bool fill_lru_gen_lists(struct lruvec *lruvec) +{ + enum lru_list lru; + int batch_size = 0; + + for_each_evictable_lru(lru) { + int file = is_file_lru(lru); + bool active = is_active_lru(lru); + struct list_head *head = &lruvec->lists[lru]; + + if (!lruvec->evictable.enabled[file]) + continue; + + while (!list_empty(head)) { + bool success; + struct page *page = lru_to_page(head); + + VM_BUG_ON_PAGE(PageTail(page), page); + VM_BUG_ON_PAGE(PageUnevictable(page), page); + VM_BUG_ON_PAGE(PageActive(page) != active, page); + VM_BUG_ON_PAGE(page_lru_gen(page) != -1, page); + VM_BUG_ON_PAGE(page_is_file_lru(page) != file, page); + + prefetchw_prev_lru_page(page, 
head, flags); + + del_page_from_lru_list(page, lruvec); + success = page_set_lru_gen(page, lruvec, true); + VM_BUG_ON(!success); + + if (++batch_size == MAX_BATCH_SIZE) + return false; + } + } + + return true; +} + +static bool drain_lru_gen_lists(struct lruvec *lruvec) +{ + int gen, file, zone; + int batch_size = 0; + + for_each_gen_type_zone(gen, file, zone) { + struct list_head *head = &lruvec->evictable.lists[gen][file][zone]; + + if (lruvec->evictable.enabled[file]) + continue; + + while (!list_empty(head)) { + bool success; + struct page *page = lru_to_page(head); + + VM_BUG_ON_PAGE(PageTail(page), page); + VM_BUG_ON_PAGE(PageUnevictable(page), page); + VM_BUG_ON_PAGE(PageActive(page), page); + VM_BUG_ON_PAGE(page_is_file_lru(page) != file, page); + VM_BUG_ON_PAGE(page_zonenum(page) != zone, page); + + prefetchw_prev_lru_page(page, head, flags); + + success = page_clear_lru_gen(page, lruvec); + VM_BUG_ON(!success); + add_page_to_lru_list(page, lruvec); + + if (++batch_size == MAX_BATCH_SIZE) + return false; + } + } + + return true; +} + +static bool __maybe_unused state_is_valid(struct lruvec *lruvec) +{ + int gen, file, zone; + enum lru_list lru; + + for_each_evictable_lru(lru) { + file = is_file_lru(lru); + + if (lruvec->evictable.enabled[file] && + !list_empty(&lruvec->lists[lru])) + return false; + } + + for_each_gen_type_zone(gen, file, zone) { + if (!lruvec->evictable.enabled[file] && + !list_empty(&lruvec->evictable.lists[gen][file][zone])) + return false; + + VM_WARN_ONCE(!lruvec->evictable.enabled[file] && + lruvec->evictable.sizes[gen][file][zone], + "lru_gen: possible unbalanced number of pages"); + } + + return true; +} + +/* + * We enable/disable file multigenerational lru according to the main switch. + * + * For anon multigenerational lru, we only enabled it when main switch is on + * and there is at least one swapfile; we disable it when there is no swapfile + * regardless of the value of the main switch. Otherwise, we may eventually + * run out of generation numbers and have to call inc_min_seq(), which brings + * an unnecessary cost. + */ +void lru_gen_set_state(bool enable, bool main, bool swap) +{ + struct mem_cgroup *memcg; + + mem_hotplug_begin(); + mutex_lock(&lru_gen_state_mutex); + cgroup_lock(); + + main = main && enable != lru_gen_enabled(); + swap = swap && !(enable ? lru_gen_nr_swapfiles++ : --lru_gen_nr_swapfiles); + swap = swap && lru_gen_enabled(); + if (!main && !swap) + goto unlock; + + if (main) { + if (enable) + static_branch_enable(&lru_gen_static_key); + else + static_branch_disable(&lru_gen_static_key); + } + + memcg = mem_cgroup_iter(NULL, NULL, NULL); + do { + int nid; + + for_each_node_state(nid, N_MEMORY) { + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid)); + + spin_lock_irq(&lruvec->lru_lock); + + VM_BUG_ON(!seq_is_valid(lruvec)); + VM_BUG_ON(!state_is_valid(lruvec)); + + WRITE_ONCE(lruvec->evictable.enabled[0], + lru_gen_enabled() && lru_gen_nr_swapfiles); + WRITE_ONCE(lruvec->evictable.enabled[1], + lru_gen_enabled()); + + while (!(enable ? 
fill_lru_gen_lists(lruvec) : + drain_lru_gen_lists(lruvec))) { + spin_unlock_irq(&lruvec->lru_lock); + cond_resched(); + spin_lock_irq(&lruvec->lru_lock); + } + + spin_unlock_irq(&lruvec->lru_lock); + } + + cond_resched(); + } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL))); +unlock: + cgroup_unlock(); + mutex_unlock(&lru_gen_state_mutex); + mem_hotplug_done(); +} + +static int __meminit __maybe_unused +lru_gen_online_mem(struct notifier_block *self, unsigned long action, void *arg) +{ + struct mem_cgroup *memcg; + struct memory_notify *mnb = arg; + int nid = mnb->status_change_nid; + + if (action != MEM_GOING_ONLINE || nid == NUMA_NO_NODE) + return NOTIFY_DONE; + + mutex_lock(&lru_gen_state_mutex); + cgroup_lock(); + + memcg = mem_cgroup_iter(NULL, NULL, NULL); + do { + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid)); + + VM_BUG_ON(!seq_is_valid(lruvec)); + VM_BUG_ON(!state_is_valid(lruvec)); + + WRITE_ONCE(lruvec->evictable.enabled[0], + lru_gen_enabled() && lru_gen_nr_swapfiles); + WRITE_ONCE(lruvec->evictable.enabled[1], + lru_gen_enabled()); + } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL))); + + cgroup_unlock(); + mutex_unlock(&lru_gen_state_mutex); + + return NOTIFY_DONE; +} + /****************************************************************************** * initialization ******************************************************************************/ +void lru_gen_init_lruvec(struct lruvec *lruvec) +{ + int i; + int gen, file, zone; + +#ifdef CONFIG_MEMCG + atomic_set(&lruvec->evictable.priority, DEF_PRIORITY); +#endif + + lruvec->evictable.max_seq = MIN_NR_GENS; + lruvec->evictable.enabled[0] = lru_gen_enabled() && lru_gen_nr_swapfiles; + lruvec->evictable.enabled[1] = lru_gen_enabled(); + + for (i = 0; i <= MIN_NR_GENS; i++) + lruvec->evictable.timestamps[i] = jiffies; + + for_each_gen_type_zone(gen, file, zone) + INIT_LIST_HEAD(&lruvec->evictable.lists[gen][file][zone]); +} + static int __init init_lru_gen(void) { + BUILD_BUG_ON(MAX_NR_GENS <= MIN_NR_GENS); + BUILD_BUG_ON(BIT(LRU_GEN_WIDTH) <= MAX_NR_GENS); + if (mem_cgroup_disabled()) { global_mm_list = alloc_mm_list(); if (!global_mm_list) { @@ -4567,6 +5819,9 @@ static int __init init_lru_gen(void) } } + if (hotplug_memory_notifier(lru_gen_online_mem, 0)) + pr_err("lru_gen: failed to subscribe hotplug notifications\n"); + return 0; }; /* From patchwork Sat Mar 13 07:57:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Zhao X-Patchwork-Id: 12136587 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3287CC433E6 for ; Sat, 13 Mar 2021 07:58:24 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C49A764F1D for ; Sat, 13 Mar 2021 07:58:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C49A764F1D Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass 
Date: Sat, 13 Mar 2021 00:57:44 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-12-yuzhao@google.com>
References: <20210313075747.3781593-1-yuzhao@google.com>
Subject: [PATCH v1 11/14] mm: multigenerational lru: page activation
From: Yu Zhao
To: linux-mm@kvack.org
Cc: Alex Shi, Andrew Morton, Dave Hansen, Hillf Danton, Johannes Weiner,
 Joonsoo Kim, Matthew Wilcox, Mel Gorman, Michal Hocko, Roman Gushchin,
 Vlastimil Babka, Wei Yang, Yang Shi, Ying Huang,
 linux-kernel@vger.kernel.org, page-reclaim@google.com, Yu Zhao
In the page fault path, we want to add pages to the per-zone lists indexed
by max_seq as they cannot be evicted without going through the aging first.

For anon pages, we rename lru_cache_add_inactive_or_unevictable() to
lru_cache_add_page_vma() and add a new parameter, which is set to true in
the page fault path, to indicate whether they should be added to the
per-zone lists indexed by max_seq.

For page/swap cache, since we cannot differentiate the page fault path from
the readahead path at the time we call lru_cache_add() in
add_to_page_cache_lru() and __read_swap_cache_async(), we have to add a new
function lru_gen_activate_page(), which is essentially activate_page(), to
move pages to the per-zone lists indexed by max_seq at a later time.
Hopefully we will find the pages we want to activate in lru_pvecs.lru_add
and can simply set PageActive() on them without having to actually move
them.

In the reclaim path, pages mapped around a referenced PTE may also have
been referenced due to spatial locality. We add a new function
lru_gen_scan_around() to scan the vicinity of such a PTE.

In addition, we add a new function page_is_active() to tell whether a page
is active. We cannot use PageActive() because it is only set on active
pages while they are not on the multigenerational lru; it is cleared while
pages are on the multigenerational lru, in order to spare the aging the
trouble of clearing it when an active generation becomes inactive.
Internally, page_is_active() compares the generation number of a page with
max_seq and max_seq-1, which are active generations and therefore protected
from eviction. Other generations, which may or may not exist, are inactive.
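For readers unfamiliar with the generation numbering, the check that
page_is_active() performs boils down to the comparison sketched below. This
is a simplified illustration rather than the patch's code: the real
page_is_active() also falls back to PageActive() for pages that are not on
the multigenerational lru and looks up the lruvec under RCU, and the helper
name here is made up.

	#include <stdbool.h>

	/*
	 * Illustrative only: a generation is "active" when it is max_seq or
	 * max_seq - 1, the two youngest generations, which the eviction
	 * never touches.
	 */
	static bool gen_seq_is_active(unsigned long seq, unsigned long max_seq)
	{
		/* assumes seq <= max_seq; sequence numbers only grow */
		return seq + 1 >= max_seq;
	}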
Signed-off-by: Yu Zhao --- fs/proc/task_mmu.c | 3 ++- include/linux/mm_inline.h | 52 ++++++++++++++++++++++++++++++++++++++ include/linux/mmzone.h | 6 +++++ include/linux/swap.h | 4 +-- kernel/events/uprobes.c | 2 +- mm/huge_memory.c | 2 +- mm/khugepaged.c | 2 +- mm/memory.c | 14 +++++++---- mm/migrate.c | 2 +- mm/rmap.c | 6 +++++ mm/swap.c | 26 +++++++++++-------- mm/swapfile.c | 2 +- mm/userfaultfd.c | 2 +- mm/vmscan.c | 53 ++++++++++++++++++++++++++++++++++++++- 14 files changed, 150 insertions(+), 26 deletions(-) diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 3cec6fbef725..7cd173710e76 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include @@ -1720,7 +1721,7 @@ static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty, if (PageSwapCache(page)) md->swapcache += nr_pages; - if (PageActive(page) || PageUnevictable(page)) + if (PageUnevictable(page) || page_is_active(compound_head(page), NULL)) md->active += nr_pages; if (PageWriteback(page)) diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h index 2d306cab36bc..a1a382418fc4 100644 --- a/include/linux/mm_inline.h +++ b/include/linux/mm_inline.h @@ -116,6 +116,49 @@ static inline int page_lru_gen(struct page *page) return ((READ_ONCE(page->flags) & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1; } +/* This function works regardless whether multigenerational lru is enabled. */ +static inline bool page_is_active(struct page *page, struct lruvec *lruvec) +{ + struct mem_cgroup *memcg; + int gen = page_lru_gen(page); + bool active = false; + + VM_BUG_ON_PAGE(PageTail(page), page); + + if (gen < 0) + return PageActive(page); + + if (lruvec) { + VM_BUG_ON_PAGE(PageUnevictable(page), page); + VM_BUG_ON_PAGE(PageActive(page), page); + lockdep_assert_held(&lruvec->lru_lock); + + return lru_gen_is_active(lruvec, gen); + } + + rcu_read_lock(); + + memcg = page_memcg_rcu(page); + lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page)); + active = lru_gen_is_active(lruvec, gen); + + rcu_read_unlock(); + + return active; +} + +/* Activate a page from page cache or swap cache after it's mapped. */ +static inline void lru_gen_activate_page(struct page *page, struct vm_area_struct *vma) +{ + if (!lru_gen_enabled() || PageActive(page)) + return; + + if (vma->vm_flags & (VM_LOCKED | VM_SPECIAL | VM_HUGETLB)) + return; + + activate_page(page); +} + /* Update multigenerational lru sizes in addition to active/inactive lru sizes. 
*/ static inline void lru_gen_update_size(struct page *page, struct lruvec *lruvec, int old_gen, int new_gen) @@ -252,6 +295,15 @@ static inline bool lru_gen_enabled(void) return false; } +static inline bool page_is_active(struct page *page, struct lruvec *lruvec) +{ + return PageActive(page); +} + +static inline void lru_gen_activate_page(struct page *page, struct vm_area_struct *vma) +{ +} + static inline bool page_set_lru_gen(struct page *page, struct lruvec *lruvec, bool front) { return false; diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 173083bb846e..99156602cd06 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -292,6 +292,7 @@ enum lruvec_flags { }; struct lruvec; +struct page_vma_mapped_walk; #define LRU_GEN_MASK ((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF) @@ -328,6 +329,7 @@ struct lru_gen { void lru_gen_init_lruvec(struct lruvec *lruvec); void lru_gen_set_state(bool enable, bool main, bool swap); +void lru_gen_scan_around(struct page_vma_mapped_walk *pvmw); #else /* CONFIG_LRU_GEN */ @@ -339,6 +341,10 @@ static inline void lru_gen_set_state(bool enable, bool main, bool swap) { } +static inline void lru_gen_scan_around(struct page_vma_mapped_walk *pvmw) +{ +} + #endif /* CONFIG_LRU_GEN */ struct lruvec { diff --git a/include/linux/swap.h b/include/linux/swap.h index de2bbbf181ba..0e7532c7db22 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -350,8 +350,8 @@ extern void deactivate_page(struct page *page); extern void mark_page_lazyfree(struct page *page); extern void swap_setup(void); -extern void lru_cache_add_inactive_or_unevictable(struct page *page, - struct vm_area_struct *vma); +extern void lru_cache_add_page_vma(struct page *page, struct vm_area_struct *vma, + bool faulting); /* linux/mm/vmscan.c */ extern unsigned long zone_reclaimable_pages(struct zone *zone); diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c index 6addc9780319..4e93e5602723 100644 --- a/kernel/events/uprobes.c +++ b/kernel/events/uprobes.c @@ -184,7 +184,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr, if (new_page) { get_page(new_page); page_add_new_anon_rmap(new_page, vma, addr, false); - lru_cache_add_inactive_or_unevictable(new_page, vma); + lru_cache_add_page_vma(new_page, vma, false); } else /* no new page, just dec_mm_counter for old_page */ dec_mm_counter(mm, MM_ANONPAGES); diff --git a/mm/huge_memory.c b/mm/huge_memory.c index be9bf681313c..62e14da5264e 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -637,7 +637,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, entry = mk_huge_pmd(page, vma->vm_page_prot); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); page_add_new_anon_rmap(page, vma, haddr, true); - lru_cache_add_inactive_or_unevictable(page, vma); + lru_cache_add_page_vma(page, vma, true); pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable); set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry); update_mmu_cache_pmd(vma, vmf->address, vmf->pmd); diff --git a/mm/khugepaged.c b/mm/khugepaged.c index a7d6cb912b05..08a43910f232 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1199,7 +1199,7 @@ static void collapse_huge_page(struct mm_struct *mm, spin_lock(pmd_ptl); BUG_ON(!pmd_none(*pmd)); page_add_new_anon_rmap(new_page, vma, address, true); - lru_cache_add_inactive_or_unevictable(new_page, vma); + lru_cache_add_page_vma(new_page, vma, true); pgtable_trans_huge_deposit(mm, pmd, pgtable); set_pmd_at(mm, address, pmd, _pmd); update_mmu_cache_pmd(vma, address, 
pmd); diff --git a/mm/memory.c b/mm/memory.c index c8e357627318..7188607bddb9 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -73,6 +73,7 @@ #include #include #include +#include #include @@ -845,7 +846,7 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma copy_user_highpage(new_page, page, addr, src_vma); __SetPageUptodate(new_page); page_add_new_anon_rmap(new_page, dst_vma, addr, false); - lru_cache_add_inactive_or_unevictable(new_page, dst_vma); + lru_cache_add_page_vma(new_page, dst_vma, false); rss[mm_counter(new_page)]++; /* All done, just insert the new page copy in the child */ @@ -2913,7 +2914,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) */ ptep_clear_flush_notify(vma, vmf->address, vmf->pte); page_add_new_anon_rmap(new_page, vma, vmf->address, false); - lru_cache_add_inactive_or_unevictable(new_page, vma); + lru_cache_add_page_vma(new_page, vma, true); /* * We call the notify macro here because, when using secondary * mmu page tables (such as kvm shadow page tables), we want the @@ -3436,9 +3437,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) /* ksm created a completely new copy */ if (unlikely(page != swapcache && swapcache)) { page_add_new_anon_rmap(page, vma, vmf->address, false); - lru_cache_add_inactive_or_unevictable(page, vma); + lru_cache_add_page_vma(page, vma, true); } else { do_page_add_anon_rmap(page, vma, vmf->address, exclusive); + lru_gen_activate_page(page, vma); } swap_free(entry); @@ -3582,7 +3584,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); page_add_new_anon_rmap(page, vma, vmf->address, false); - lru_cache_add_inactive_or_unevictable(page, vma); + lru_cache_add_page_vma(page, vma, true); setpte: set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry); @@ -3707,6 +3709,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page) add_mm_counter(vma->vm_mm, mm_counter_file(page), HPAGE_PMD_NR); page_add_file_rmap(page, true); + lru_gen_activate_page(page, vma); /* * deposit and withdraw with pmd lock held */ @@ -3750,10 +3753,11 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr) if (write && !(vma->vm_flags & VM_SHARED)) { inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); page_add_new_anon_rmap(page, vma, addr, false); - lru_cache_add_inactive_or_unevictable(page, vma); + lru_cache_add_page_vma(page, vma, true); } else { inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page)); page_add_file_rmap(page, false); + lru_gen_activate_page(page, vma); } set_pte_at(vma->vm_mm, addr, vmf->pte, entry); } diff --git a/mm/migrate.c b/mm/migrate.c index 62b81d5257aa..1064b03cac33 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -3004,7 +3004,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate, inc_mm_counter(mm, MM_ANONPAGES); page_add_new_anon_rmap(page, vma, addr, false); if (!is_zone_device_page(page)) - lru_cache_add_inactive_or_unevictable(page, vma); + lru_cache_add_page_vma(page, vma, false); get_page(page); if (flush) { diff --git a/mm/rmap.c b/mm/rmap.c index b0fc27e77d6d..a44f9ee74ee1 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -72,6 +72,7 @@ #include #include #include +#include #include @@ -792,6 +793,11 @@ static bool page_referenced_one(struct page *page, struct vm_area_struct *vma, } if (pvmw.pte) { + /* multigenerational lru exploits spatial locality */ + if (lru_gen_enabled() && pte_young(*pvmw.pte)) { + lru_gen_scan_around(&pvmw); + referenced++; + } if (ptep_clear_flush_young_notify(vma, address, 
pvmw.pte)) { /* diff --git a/mm/swap.c b/mm/swap.c index bd10efe00684..7aa85004b490 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -310,7 +310,7 @@ void lru_note_cost_page(struct page *page) static void __activate_page(struct page *page, struct lruvec *lruvec) { - if (!PageActive(page) && !PageUnevictable(page)) { + if (!PageUnevictable(page) && !page_is_active(page, lruvec)) { int nr_pages = thp_nr_pages(page); del_page_from_lru_list(page, lruvec); @@ -341,7 +341,7 @@ static bool need_activate_page_drain(int cpu) static void activate_page_on_lru(struct page *page) { page = compound_head(page); - if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) { + if (PageLRU(page) && !PageUnevictable(page) && !page_is_active(page, NULL)) { struct pagevec *pvec; local_lock(&lru_pvecs.lock); @@ -435,7 +435,7 @@ void mark_page_accessed(struct page *page) * this list is never rotated or maintained, so marking an * evictable page accessed has no effect. */ - } else if (!PageActive(page)) { + } else if (!page_is_active(page, NULL)) { activate_page(page); ClearPageReferenced(page); workingset_activation(page); @@ -471,15 +471,14 @@ void lru_cache_add(struct page *page) EXPORT_SYMBOL(lru_cache_add); /** - * lru_cache_add_inactive_or_unevictable + * lru_cache_add_page_vma * @page: the page to be added to LRU * @vma: vma in which page is mapped for determining reclaimability * - * Place @page on the inactive or unevictable LRU list, depending on its - * evictability. + * Place @page on an LRU list, depending on its evictability. */ -void lru_cache_add_inactive_or_unevictable(struct page *page, - struct vm_area_struct *vma) +void lru_cache_add_page_vma(struct page *page, struct vm_area_struct *vma, + bool faulting) { bool unevictable; @@ -496,6 +495,11 @@ void lru_cache_add_inactive_or_unevictable(struct page *page, __mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages); count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages); } + + /* multigenerational lru uses PageActive() to track page faults */ + if (lru_gen_enabled() && !unevictable && faulting) + SetPageActive(page); + lru_cache_add(page); } @@ -522,7 +526,7 @@ void lru_cache_add_inactive_or_unevictable(struct page *page, */ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec) { - bool active = PageActive(page); + bool active = page_is_active(page, lruvec); int nr_pages = thp_nr_pages(page); if (PageUnevictable(page)) @@ -562,7 +566,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec) static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec) { - if (PageActive(page) && !PageUnevictable(page)) { + if (!PageUnevictable(page) && page_is_active(page, lruvec)) { int nr_pages = thp_nr_pages(page); del_page_from_lru_list(page, lruvec); @@ -676,7 +680,7 @@ void deactivate_file_page(struct page *page) */ void deactivate_page(struct page *page) { - if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) { + if (PageLRU(page) && !PageUnevictable(page) && page_is_active(page, NULL)) { struct pagevec *pvec; local_lock(&lru_pvecs.lock); diff --git a/mm/swapfile.c b/mm/swapfile.c index fe03cfeaa08f..c0956b3bde03 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -1936,7 +1936,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd, page_add_anon_rmap(page, vma, addr, false); } else { /* ksm created a completely new copy */ page_add_new_anon_rmap(page, vma, addr, false); - lru_cache_add_inactive_or_unevictable(page, vma); + lru_cache_add_page_vma(page, vma, false); } 
swap_free(entry); out: diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 9a3d451402d7..e1d4cd3103b8 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -123,7 +123,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm, inc_mm_counter(dst_mm, MM_ANONPAGES); page_add_new_anon_rmap(page, dst_vma, dst_addr, false); - lru_cache_add_inactive_or_unevictable(page, dst_vma); + lru_cache_add_page_vma(page, dst_vma, true); set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte); diff --git a/mm/vmscan.c b/mm/vmscan.c index fd49a9a5d7f5..ce868d89dc53 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1876,7 +1876,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec, add_page_to_lru_list(page, lruvec); nr_pages = thp_nr_pages(page); nr_moved += nr_pages; - if (PageActive(page)) + if (page_is_active(page, lruvec)) workingset_age_nonresident(lruvec, nr_pages); } @@ -4688,6 +4688,57 @@ static int page_update_lru_gen(struct page *page, int new_gen) return old_gen; } +void lru_gen_scan_around(struct page_vma_mapped_walk *pvmw) +{ + pte_t *pte; + unsigned long start, end; + int old_gen, new_gen; + unsigned long flags; + struct lruvec *lruvec; + struct mem_cgroup *memcg; + struct pglist_data *pgdat = page_pgdat(pvmw->page); + + lockdep_assert_held(pvmw->ptl); + VM_BUG_ON_VMA(pvmw->address < pvmw->vma->vm_start, pvmw->vma); + + start = max(pvmw->address & PMD_MASK, pvmw->vma->vm_start); + end = pmd_addr_end(pvmw->address, pvmw->vma->vm_end); + pte = pvmw->pte - ((pvmw->address - start) >> PAGE_SHIFT); + + memcg = lock_page_memcg(pvmw->page); + lruvec = lock_page_lruvec_irqsave(pvmw->page, &flags); + + new_gen = lru_gen_from_seq(lruvec->evictable.max_seq); + + for (; start != end; pte++, start += PAGE_SIZE) { + struct page *page; + unsigned long pfn = pte_pfn(*pte); + + if (!pte_present(*pte) || !pte_young(*pte) || is_zero_pfn(pfn)) + continue; + + if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat)) + continue; + + page = compound_head(pte_page(*pte)); + if (page_to_nid(page) != pgdat->node_id) + continue; + if (page_memcg_rcu(page) != memcg) + continue; + /* + * We may be holding many locks. So try to finish as fast as + * possible and leave the accessed and the dirty bits to page + * table walk. 
+ */ + old_gen = page_update_lru_gen(page, new_gen); + if (old_gen >= 0 && old_gen != new_gen) + lru_gen_update_size(page, lruvec, old_gen, new_gen); + } + + unlock_page_lruvec_irqrestore(lruvec, flags); + unlock_page_memcg(pvmw->page); +} + struct mm_walk_args { struct mem_cgroup *memcg; unsigned long max_seq; From patchwork Sat Mar 13 07:57:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Zhao X-Patchwork-Id: 12136589 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6517AC433DB for ; Sat, 13 Mar 2021 07:58:27 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E68E56148E for ; Sat, 13 Mar 2021 07:58:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E68E56148E Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 08A9B6B007E; Sat, 13 Mar 2021 02:58:14 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 013ED6B0080; Sat, 13 Mar 2021 02:58:13 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DA82D6B0081; Sat, 13 Mar 2021 02:58:13 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0037.hostedemail.com [216.40.44.37]) by kanga.kvack.org (Postfix) with ESMTP id B9F456B0080 for ; Sat, 13 Mar 2021 02:58:13 -0500 (EST) Received: from smtpin11.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 7C19C180142E0 for ; Sat, 13 Mar 2021 07:58:13 +0000 (UTC) X-FDA: 77914098066.11.0090526 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) by imf18.hostedemail.com (Postfix) with ESMTP id 4C137200DB34 for ; Sat, 13 Mar 2021 07:58:14 +0000 (UTC) Received: by mail-yb1-f202.google.com with SMTP id f81so31797243yba.8 for ; Fri, 12 Mar 2021 23:58:12 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=HpM4Xhq08m+hN0lK+sEgEqrsIbbrPIT86c74gNI/u9w=; b=iZooLRgVq0QKt7uS3ecgWUo9C9cGTHyr/2U5sxKCS6QXN0IgrJO3PAgtZX95WUfwcs ZSAQR6gNrWG+wP3LmXgu+DSkMxuKFw0cNRjKklelEg68uDJJWokPALnGLPA4nMy8dmmL gNmywFSsTBT8s8Opdpx1NmeoX3AoH+b/8wRfOXpwgaGDC+vZ82D7uiEVDdLrnbyopU8f IpPsdEWG/PKjNmqr9aaxr+DYpbpcOwr48yDoUFRDhKXq2kRMLS3q9Cr2+5Aa3cOZI9gL a6NMw3O/GtXpPy+iDG3kH66GA3tICHTkX1Zm9PHqXzTR/dFawFQyNro80/QMStsyVm0P hUPA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=HpM4Xhq08m+hN0lK+sEgEqrsIbbrPIT86c74gNI/u9w=; b=fCqVYXwdXm76Xp+Kty3Ki12SnphLhaJKEQ7UfPSXvO5u1+ddKNdSCZhfDA1VE1UCG/ s3W8+P2rKzh1vG191gZK0pMX/kk1bTdjYojbsrA/FQ0BXzYhaF/pVxPJAvu0/ba0yLdF 
Date: Sat, 13 Mar 2021 00:57:45 -0700
In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com>
Message-Id: <20210313075747.3781593-13-yuzhao@google.com>
References: <20210313075747.3781593-1-yuzhao@google.com>
Subject: [PATCH v1 12/14] mm: multigenerational lru: user space interface
From: Yu Zhao
To: linux-mm@kvack.org
Cc: Alex Shi, Andrew Morton, Dave Hansen, Hillf Danton, Johannes Weiner,
 Joonsoo Kim, Matthew Wilcox, Mel Gorman, Michal Hocko, Roman Gushchin,
 Vlastimil Babka, Wei Yang, Yang Shi, Ying Huang,
 linux-kernel@vger.kernel.org, page-reclaim@google.com, Yu Zhao

Add a sysfs file /sys/kernel/mm/lru_gen/enabled so user space can enable
and disable the multigenerational lru at runtime.

Add a sysfs file /sys/kernel/mm/lru_gen/spread so user space can spread
pages out across multiple generations. More generations make the
background aging more aggressive.

Add a debugfs file /sys/kernel/debug/lru_gen so user space can monitor the
multigenerational lru and trigger the aging and the eviction. This file
has the following output:

  memcg  memcg_id  memcg_path
    node  node_id
      min_gen  birth_time  anon_size  file_size
      ...
      max_gen  birth_time  anon_size  file_size

Given a memcg and a node, "min_gen" is the oldest generation (number) and
"max_gen" is the youngest. Birth time is in milliseconds. Anon and file
sizes are in pages.

Write "+ memcg_id node_id gen [swappiness]" to this file to account
referenced pages to generation "max_gen" and create the next generation
"max_gen"+1. "gen" must be equal to "max_gen" in order to avoid races. A
swap file and a non-zero swappiness value are required to scan anon pages.
If swapping is not desired, set vm.swappiness to 0 and overwrite it with a
non-zero "swappiness".

Write "- memcg_id node_id gen [swappiness] [nr_to_reclaim]" to this file
to evict generations less than or equal to "gen". "gen" must be less than
"max_gen"-1 as "max_gen" and "max_gen"-1 are active generations and
therefore protected from the eviction. "nr_to_reclaim" can be used to
limit the number of pages to be evicted.
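As an illustration of the two write commands described above, a minimal
user-space sketch could look like the following. The memcg_id, node_id,
generation numbers and page count are hypothetical; a real tool would
first read /sys/kernel/debug/lru_gen to learn the current "max_gen" and
"min_gen" for the memcg and node it targets.

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/kernel/debug/lru_gen", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		/* aging: account referenced pages to max_gen (assumed 7) and create gen 8 */
		fprintf(f, "+ 1 0 7\n");
		/* eviction: evict generations <= 5, swappiness 200, at most 4096 pages */
		fprintf(f, "- 1 0 5 200 4096\n");
		fclose(f);
		return 0;
	}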
Multiple command lines are supported, so does concatenation with delimiters "," and ";". Signed-off-by: Yu Zhao Reported-by: kernel test robot --- mm/vmscan.c | 334 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 334 insertions(+) diff --git a/mm/vmscan.c b/mm/vmscan.c index ce868d89dc53..b59b556e9587 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -51,6 +51,7 @@ #include #include #include +#include #include #include @@ -5833,6 +5834,334 @@ lru_gen_online_mem(struct notifier_block *self, unsigned long action, void *arg) return NOTIFY_DONE; } +/****************************************************************************** + * sysfs interface + ******************************************************************************/ + +static ssize_t show_lru_gen_spread(struct kobject *kobj, struct kobj_attribute *attr, + char *buf) +{ + return sprintf(buf, "%d\n", READ_ONCE(lru_gen_spread)); +} + +static ssize_t store_lru_gen_spread(struct kobject *kobj, struct kobj_attribute *attr, + const char *buf, size_t len) +{ + int spread; + + if (kstrtoint(buf, 10, &spread) || spread >= MAX_NR_GENS) + return -EINVAL; + + WRITE_ONCE(lru_gen_spread, spread); + + return len; +} + +static struct kobj_attribute lru_gen_spread_attr = __ATTR( + spread, 0644, + show_lru_gen_spread, store_lru_gen_spread +); + +static ssize_t show_lru_gen_enabled(struct kobject *kobj, struct kobj_attribute *attr, + char *buf) +{ + return snprintf(buf, PAGE_SIZE, "%ld\n", lru_gen_enabled()); +} + +static ssize_t store_lru_gen_enabled(struct kobject *kobj, struct kobj_attribute *attr, + const char *buf, size_t len) +{ + int enable; + + if (kstrtoint(buf, 10, &enable)) + return -EINVAL; + + lru_gen_set_state(enable, true, false); + + return len; +} + +static struct kobj_attribute lru_gen_enabled_attr = __ATTR( + enabled, 0644, show_lru_gen_enabled, store_lru_gen_enabled +); + +static struct attribute *lru_gen_attrs[] = { + &lru_gen_spread_attr.attr, + &lru_gen_enabled_attr.attr, + NULL +}; + +static struct attribute_group lru_gen_attr_group = { + .name = "lru_gen", + .attrs = lru_gen_attrs, +}; + +/****************************************************************************** + * debugfs interface + ******************************************************************************/ + +static void *lru_gen_seq_start(struct seq_file *m, loff_t *pos) +{ + struct mem_cgroup *memcg; + loff_t nr_to_skip = *pos; + + m->private = kzalloc(PATH_MAX, GFP_KERNEL); + if (!m->private) + return ERR_PTR(-ENOMEM); + + memcg = mem_cgroup_iter(NULL, NULL, NULL); + do { + int nid; + + for_each_node_state(nid, N_MEMORY) { + if (!nr_to_skip--) + return mem_cgroup_lruvec(memcg, NODE_DATA(nid)); + } + } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL))); + + return NULL; +} + +static void lru_gen_seq_stop(struct seq_file *m, void *v) +{ + if (!IS_ERR_OR_NULL(v)) + mem_cgroup_iter_break(NULL, lruvec_memcg(v)); + + kfree(m->private); + m->private = NULL; +} + +static void *lru_gen_seq_next(struct seq_file *m, void *v, loff_t *pos) +{ + int nid = lruvec_pgdat(v)->node_id; + struct mem_cgroup *memcg = lruvec_memcg(v); + + ++*pos; + + nid = next_memory_node(nid); + if (nid == MAX_NUMNODES) { + memcg = mem_cgroup_iter(NULL, memcg, NULL); + if (!memcg) + return NULL; + + nid = first_memory_node; + } + + return mem_cgroup_lruvec(memcg, NODE_DATA(nid)); +} + +static int lru_gen_seq_show(struct seq_file *m, void *v) +{ + unsigned long seq; + struct lruvec *lruvec = v; + int nid = lruvec_pgdat(lruvec)->node_id; + struct mem_cgroup *memcg = 
lruvec_memcg(lruvec); + DEFINE_MAX_SEQ(lruvec); + DEFINE_MIN_SEQ(lruvec); + + if (nid == first_memory_node) { +#ifdef CONFIG_MEMCG + if (memcg) + cgroup_path(memcg->css.cgroup, m->private, PATH_MAX); +#endif + seq_printf(m, "memcg %5hu %s\n", + mem_cgroup_id(memcg), (char *)m->private); + } + + seq_printf(m, " node %4d\n", nid); + + for (seq = min(min_seq[0], min_seq[1]); seq <= max_seq; seq++) { + int gen, file, zone; + unsigned int msecs; + long sizes[ANON_AND_FILE] = {}; + + gen = lru_gen_from_seq(seq); + + msecs = jiffies_to_msecs(jiffies - READ_ONCE( + lruvec->evictable.timestamps[gen])); + + for_each_type_zone(file, zone) + sizes[file] += READ_ONCE( + lruvec->evictable.sizes[gen][file][zone]); + + sizes[0] = max(sizes[0], 0L); + sizes[1] = max(sizes[1], 0L); + + seq_printf(m, "%11lu %9u %9lu %9lu\n", + seq, msecs, sizes[0], sizes[1]); + } + + return 0; +} + +static const struct seq_operations lru_gen_seq_ops = { + .start = lru_gen_seq_start, + .stop = lru_gen_seq_stop, + .next = lru_gen_seq_next, + .show = lru_gen_seq_show, +}; + +static int lru_gen_debugfs_open(struct inode *inode, struct file *file) +{ + return seq_open(file, &lru_gen_seq_ops); +} + +static int advance_max_seq(struct lruvec *lruvec, unsigned long seq, int swappiness) +{ + struct scan_control sc = { + .target_mem_cgroup = lruvec_memcg(lruvec), + }; + DEFINE_MAX_SEQ(lruvec); + + if (seq == max_seq) + walk_mm_list(lruvec, max_seq, &sc, swappiness); + + return seq > max_seq ? -EINVAL : 0; +} + +static int advance_min_seq(struct lruvec *lruvec, unsigned long seq, int swappiness, + unsigned long nr_to_reclaim) +{ + struct blk_plug plug; + int err = -EINTR; + long nr_to_scan = LONG_MAX; + struct scan_control sc = { + .nr_to_reclaim = nr_to_reclaim, + .target_mem_cgroup = lruvec_memcg(lruvec), + .may_writepage = 1, + .may_unmap = 1, + .may_swap = 1, + .reclaim_idx = MAX_NR_ZONES - 1, + .gfp_mask = GFP_KERNEL, + }; + DEFINE_MAX_SEQ(lruvec); + + if (seq >= max_seq - 1) + return -EINVAL; + + blk_start_plug(&plug); + + while (!signal_pending(current)) { + DEFINE_MIN_SEQ(lruvec); + + if (seq < min(min_seq[!swappiness], min_seq[swappiness < 200]) || + !evict_lru_gen_pages(lruvec, &sc, swappiness, &nr_to_scan)) { + err = 0; + break; + } + + cond_resched(); + } + + blk_finish_plug(&plug); + + return err; +} + +static int advance_seq(char cmd, int memcg_id, int nid, unsigned long seq, + int swappiness, unsigned long nr_to_reclaim) +{ + struct lruvec *lruvec; + int err = -EINVAL; + struct mem_cgroup *memcg = NULL; + + if (!mem_cgroup_disabled()) { + rcu_read_lock(); + memcg = mem_cgroup_from_id(memcg_id); +#ifdef CONFIG_MEMCG + if (memcg && !css_tryget(&memcg->css)) + memcg = NULL; +#endif + rcu_read_unlock(); + + if (!memcg) + goto done; + } + if (memcg_id != mem_cgroup_id(memcg)) + goto done; + + if (nid < 0 || nid >= MAX_NUMNODES || !node_state(nid, N_MEMORY)) + goto done; + + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid)); + + if (swappiness == -1) + swappiness = get_swappiness(lruvec); + else if (swappiness > 200U) + goto done; + + switch (cmd) { + case '+': + err = advance_max_seq(lruvec, seq, swappiness); + break; + case '-': + err = advance_min_seq(lruvec, seq, swappiness, nr_to_reclaim); + break; + } +done: + mem_cgroup_put(memcg); + + return err; +} + +static ssize_t lru_gen_debugfs_write(struct file *file, const char __user *src, + size_t len, loff_t *pos) +{ + void *buf; + char *cur, *next; + int err = 0; + + buf = kvmalloc(len + 1, GFP_USER); + if (!buf) + return -ENOMEM; + + if (copy_from_user(buf, src, len)) { + 
kvfree(buf); + return -EFAULT; + } + + next = buf; + next[len] = '\0'; + + while ((cur = strsep(&next, ",;\n"))) { + int n; + int end; + char cmd; + int memcg_id; + int nid; + unsigned long seq; + int swappiness = -1; + unsigned long nr_to_reclaim = -1; + + cur = skip_spaces(cur); + if (!*cur) + continue; + + n = sscanf(cur, "%c %u %u %lu %n %u %n %lu %n", &cmd, &memcg_id, &nid, + &seq, &end, &swappiness, &end, &nr_to_reclaim, &end); + if (n < 4 || cur[end]) { + err = -EINVAL; + break; + } + + err = advance_seq(cmd, memcg_id, nid, seq, swappiness, nr_to_reclaim); + if (err) + break; + } + + kvfree(buf); + + return err ? : len; +} + +static const struct file_operations lru_gen_debugfs_ops = { + .open = lru_gen_debugfs_open, + .read = seq_read, + .write = lru_gen_debugfs_write, + .llseek = seq_lseek, + .release = seq_release, +}; + /****************************************************************************** * initialization ******************************************************************************/ @@ -5873,6 +6202,11 @@ static int __init init_lru_gen(void) if (hotplug_memory_notifier(lru_gen_online_mem, 0)) pr_err("lru_gen: failed to subscribe hotplug notifications\n"); + if (sysfs_create_group(mm_kobj, &lru_gen_attr_group)) + pr_err("lru_gen: failed to create sysfs group\n"); + + debugfs_create_file("lru_gen", 0644, NULL, NULL, &lru_gen_debugfs_ops); + return 0; }; /* From patchwork Sat Mar 13 07:57:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Zhao X-Patchwork-Id: 12136593 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8511EC433E6 for ; Sat, 13 Mar 2021 07:58:33 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 2609564F1E for ; Sat, 13 Mar 2021 07:58:33 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2609564F1E Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7B5308D0002; Sat, 13 Mar 2021 02:58:15 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 767B58D0001; Sat, 13 Mar 2021 02:58:15 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5E0BF8D0002; Sat, 13 Mar 2021 02:58:15 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0214.hostedemail.com [216.40.44.214]) by kanga.kvack.org (Postfix) with ESMTP id 40E038D0001 for ; Sat, 13 Mar 2021 02:58:15 -0500 (EST) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 0EC22363D for ; Sat, 13 Mar 2021 07:58:15 +0000 (UTC) X-FDA: 77914098150.05.49719CD Received: from mail-qk1-f202.google.com (mail-qk1-f202.google.com [209.85.222.202]) by imf29.hostedemail.com (Postfix) with ESMTP id DD67FC770 for ; Sat, 13 
Mar 2021 07:58:10 +0000 (UTC) Received: by mail-qk1-f202.google.com with SMTP id h126so19890310qkd.4 for ; Fri, 12 Mar 2021 23:58:14 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=uE7IkUDVFpC8s1yHDZEBmC5mjUj6EJwbkUDWk669ASY=; b=uXhx05PdjRvuFkUcjNWapddDUcA7Q5UwyJE/THSSQbBgSLQkmn7ajsUcb7ZmUjhauR /jgBj7/Odh6Ngd12GetPXsZawVQtDY3F/Xog0R3yIye6citJOlL5TJ+2wwf2gcYmueuQ WYDJiJoxC2qvezUikRORLJGGQFEDfwGtR7lOiuTnagaswHyGlY8OAHiBnbM2NFrUlS9F wuwz3bRewUhC6hOmirv0YN+eR4e/S0TxFsdjMMf1mOQtK0M77IA18i0YjomUSsNcZo01 GPjHMp59Yr0/XsqaDXOK5S+CA6611MojOlFbWTfafAQpXFIKwJXgkzUeIs9A5goZIWHY WCzQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=uE7IkUDVFpC8s1yHDZEBmC5mjUj6EJwbkUDWk669ASY=; b=kRs1/KB4WSTDTK9wdOaP9AqkrzjWMvAKc2R2WgHKsnT/M+fIs95xiJIDdBoPBFTYqY NGTd72yc8WEpiOr2l5qH1rvG53udHPI95o1dcM/FQJ7s7tIai0wJ5xNuTA49Xl/UFg1e AAG5NpC4/bYg+VT0hQ3p9VmoULqIb2m5/xraJqhq2ZVZe4Rkqv/MMq+jtdB9C4F9bYnZ hMZCp72Xl2m28ZVZOZLNozQAxfxgB/2zXk/UWQOgZZyShIrQ5LcNUbbYM215NA1avplB s2abj4Ufi0ak1kWP5lIwy2UdfyHTM7QkYJ6j6OCj/W13FE/bHnZJ8eRRqJMJVJ2aixYA kWJQ== X-Gm-Message-State: AOAM532LK8JzEqaYuA2ptH8586KGzc0rMmX61bPeGjcA1u0Y8A6esini R3OsOr90mAoQB69ISTcNnw2Ls+/Nw/8+gWre9ITRyaXsmJUQOQ7Xu70Kf2GMyIlW2exmwXxMFby q6fMDyWrbntgAAsCo8toT33jJrC94hD8IDNzN2hVOgWzlHDzbCo9x0d0g X-Google-Smtp-Source: ABdhPJz7oVSCBhnoApkNm1wW85JMl4kSXploQGL6Mdvavu7deAd9Mg85kMswIg+jCkfg3Z/h90N99XOQmFk= X-Received: from yuzhao.bld.corp.google.com ([2620:15c:183:200:f931:d3e4:faa0:4f74]) (user=yuzhao job=sendgmr) by 2002:a05:6214:1909:: with SMTP id er9mr1749542qvb.5.1615622293640; Fri, 12 Mar 2021 23:58:13 -0800 (PST) Date: Sat, 13 Mar 2021 00:57:46 -0700 In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com> Message-Id: <20210313075747.3781593-14-yuzhao@google.com> Mime-Version: 1.0 References: <20210313075747.3781593-1-yuzhao@google.com> X-Mailer: git-send-email 2.31.0.rc2.261.g7f71774620-goog Subject: [PATCH v1 13/14] mm: multigenerational lru: Kconfig From: Yu Zhao To: linux-mm@kvack.org Cc: Alex Shi , Andrew Morton , Dave Hansen , Hillf Danton , Johannes Weiner , Joonsoo Kim , Matthew Wilcox , Mel Gorman , Michal Hocko , Roman Gushchin , Vlastimil Babka , Wei Yang , Yang Shi , Ying Huang , linux-kernel@vger.kernel.org, page-reclaim@google.com, Yu Zhao X-Stat-Signature: 95awzqiybt5oazooc5a1sihyy96yjwwd X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: DD67FC770 Received-SPF: none (flex--yuzhao.bounces.google.com>: No applicable sender policy available) receiver=imf29; identity=mailfrom; envelope-from="<3lXBMYAYKCLAokpXQeWeeWbU.SecbYdkn-ccalQSa.ehW@flex--yuzhao.bounces.google.com>"; helo=mail-qk1-f202.google.com; client-ip=209.85.222.202 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1615622290-73741 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add configuration options for multigenerational lru. 
Signed-off-by: Yu Zhao Reported-by: kernel test robot Reported-by: kernel test robot --- mm/Kconfig | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) diff --git a/mm/Kconfig b/mm/Kconfig index 24c045b24b95..3a5bcc2d7a45 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -872,4 +872,33 @@ config MAPPING_DIRTY_HELPERS config KMAP_LOCAL bool +config LRU_GEN + bool "Multigenerational LRU" + depends on MMU + help + High performance multigenerational LRU to heavily overcommit workloads + that are not IO bound. See Documentation/vm/multigen_lru.rst for + details. + + Warning: do not enable this option unless you plan to use it because + it introduces a small per-process memory overhead. + +config NR_LRU_GENS + int "Max number of generations" + depends on LRU_GEN + range 4 63 + default 7 + help + This will use ilog2(N)+1 spare bits from page flags. + + Warning: do not use numbers larger than necessary because each + generation introduces a small per-node and per-memcg memory overhead. + +config LRU_GEN_ENABLED + bool "Turn on by default" + depends on LRU_GEN + help + The default value of /sys/kernel/mm/lru_gen/enabled is 0. This option + changes it to 1. + endmenu From patchwork Sat Mar 13 07:57:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yu Zhao X-Patchwork-Id: 12136595 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-26.3 required=3.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 35593C433DB for ; Sat, 13 Mar 2021 07:58:36 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id CBCF66148E for ; Sat, 13 Mar 2021 07:58:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CBCF66148E Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 20FDC8D0003; Sat, 13 Mar 2021 02:58:17 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 1C3958D0001; Sat, 13 Mar 2021 02:58:17 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 03DDE8D0003; Sat, 13 Mar 2021 02:58:16 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0217.hostedemail.com [216.40.44.217]) by kanga.kvack.org (Postfix) with ESMTP id D8ED98D0001 for ; Sat, 13 Mar 2021 02:58:16 -0500 (EST) Received: from smtpin28.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 9843F18019B09 for ; Sat, 13 Mar 2021 07:58:16 +0000 (UTC) X-FDA: 77914098192.28.095C884 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) by imf20.hostedemail.com (Postfix) with ESMTP id 920C62F71 for ; Sat, 13 Mar 2021 07:58:11 +0000 (UTC) Received: by mail-yb1-f202.google.com with SMTP id g17so32025816ybh.4 for ; Fri, 12 Mar 2021 23:58:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=b33M625vmeFm8iJFKYIy9IbS+yyXzJHrz2YlprWAE88=; b=qyBWCu6iSCz/+GOTBSyjEGx0UNh3wx8I4EpB+DGhW3FtbTsYmoVsgkJK7K9lMib92D 8UESs064HgmPaCcFC9wummpEDT04EZB57UgnWkSzwsmT8q8yKbsLNsdnaqxDho13rxSL l1lhvY8XggaGyQS76caURzCZzmuuIb31yoMyJa36cSNEQIIGzS/Qm0HS9FQ4Sslqjhio 7G+7M9RsfMDtCuFijNWCkO0VasJ5hLLwIPnUW2My7qRxlwAQoGToYUEA5ipkn9Ckz+I1 ZZZL32LyYugyxj8DFHGhkOK2vtm0J8rqvkbb7eJOL7RwHttQzqGHotvqWMTx+tw95ZXr O2hQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=b33M625vmeFm8iJFKYIy9IbS+yyXzJHrz2YlprWAE88=; b=TuNpRZoIEvZCTECl86ejZwuEExwO7ZgzN3zgstDx0bIaqN3V2qqENYGPG3mspIpqQU Fem4fjI6nddDUpgjBJeg7skFpTqYXpq2PJC95Bc5tFLSAJWk81eO3r486DWWCKAk2+iD zmAevZK8fEBkzrWOCJCakP8UOr7IULVRoV3oPl+tEqyH4RoQu6hBzOsBMQEYqqisWkeU KrLDADzSDDJeX53yfWS3aLplyXj9vqQY+j1U9ZmsJAlr1Dv/8zHvw+yYJ/hPJjpyG35a Jvf3xtD9EqkV619nmfk6A3C6TCHfAmWfOUYnwInuFW2ocK1+f84g7B8knWIlRq8wvk8s FREQ== X-Gm-Message-State: AOAM533YGkfHXsl51QojVycHxHj7wOp19AgdmIzQ1NlSaJNGlZqO2u56 9Jx+KXBNqTaNapByr3r3Jbk8zSX6L3mz9WkurSA1Xt+v5MHuUb3qoj9KR4pqUChd1bS1xX8Gpy/ uJ+WPboR8e8cNayzPysqbdnumkNmXOSvEp3EBDvIQIvBk/E+dwpCThPi6 X-Google-Smtp-Source: ABdhPJwyYS+6AjG1nAXtEYBKUiLdJ5mAmkUQWOq9ngRJz7So3XTjiRhhO4QmZvWvkKbzXQ7oleb6ep4AFYs= X-Received: from yuzhao.bld.corp.google.com ([2620:15c:183:200:f931:d3e4:faa0:4f74]) (user=yuzhao job=sendgmr) by 2002:a25:8003:: with SMTP id m3mr24313264ybk.452.1615622295044; Fri, 12 Mar 2021 23:58:15 -0800 (PST) Date: Sat, 13 Mar 2021 00:57:47 -0700 In-Reply-To: <20210313075747.3781593-1-yuzhao@google.com> Message-Id: <20210313075747.3781593-15-yuzhao@google.com> Mime-Version: 1.0 References: <20210313075747.3781593-1-yuzhao@google.com> X-Mailer: git-send-email 2.31.0.rc2.261.g7f71774620-goog Subject: [PATCH v1 14/14] mm: multigenerational lru: documentation From: Yu Zhao To: linux-mm@kvack.org Cc: Alex Shi , Andrew Morton , Dave Hansen , Hillf Danton , Johannes Weiner , Joonsoo Kim , Matthew Wilcox , Mel Gorman , Michal Hocko , Roman Gushchin , Vlastimil Babka , Wei Yang , Yang Shi , Ying Huang , linux-kernel@vger.kernel.org, page-reclaim@google.com, Yu Zhao X-Stat-Signature: tk7kbkjgfbrrjteoorp3m9rexfeq3tap X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 920C62F71 Received-SPF: none (flex--yuzhao.bounces.google.com>: No applicable sender policy available) receiver=imf20; identity=mailfrom; envelope-from="<3l3BMYAYKCLIqmrZSgYggYdW.Ugedafmp-eecnSUc.gjY@flex--yuzhao.bounces.google.com>"; helo=mail-yb1-f202.google.com; client-ip=209.85.219.202 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1615622291-182908 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add Documentation/vm/multigen_lru.rst. 
Signed-off-by: Yu Zhao
---
 Documentation/vm/index.rst        |   1 +
 Documentation/vm/multigen_lru.rst | 210 ++++++++++++++++++++++++++++++
 2 files changed, 211 insertions(+)
 create mode 100644 Documentation/vm/multigen_lru.rst

diff --git a/Documentation/vm/index.rst b/Documentation/vm/index.rst
index eff5fbd492d0..c353b3f55924 100644
--- a/Documentation/vm/index.rst
+++ b/Documentation/vm/index.rst
@@ -17,6 +17,7 @@ various features of the Linux memory management
 
    swap_numa
    zswap
+   multigen_lru
 
 Kernel developers MM documentation
 ==================================
diff --git a/Documentation/vm/multigen_lru.rst b/Documentation/vm/multigen_lru.rst
new file mode 100644
index 000000000000..fea927da2572
--- /dev/null
+++ b/Documentation/vm/multigen_lru.rst
@@ -0,0 +1,210 @@
+=====================
+Multigenerational LRU
+=====================
+
+Quick Start
+===========
+Build Options
+-------------
+:Required: Set ``CONFIG_LRU_GEN=y``.
+
+:Optional: Change ``CONFIG_NR_LRU_GENS`` to a number ``X`` to support
+  a maximum of ``X`` generations.
+
+:Optional: Set ``CONFIG_LRU_GEN_ENABLED=y`` to turn the feature on by
+  default.
+
+Runtime Options
+---------------
+:Required: Write ``1`` to ``/sys/kernel/mm/lru_gen/enabled`` if the
+  feature was not turned on by default.
+
+:Optional: Change ``/sys/kernel/mm/lru_gen/spread`` to a number ``N``
+  to spread pages out across ``N+1`` generations. ``N`` must be less
+  than ``X``. Larger values make the background aging more aggressive.
+
+:Optional: Read ``/sys/kernel/debug/lru_gen`` to verify the feature.
+  This file has the following output:
+
+::
+
+  memcg memcg_id memcg_path
+  node node_id
+  min_gen birth_time anon_size file_size
+  ...
+  max_gen birth_time anon_size file_size
+
+Given a memcg and a node, ``min_gen`` is the oldest generation
+(number) and ``max_gen`` is the youngest. Birth time is in
+milliseconds. Anon and file sizes are in pages.
+
+Recipes
+-------
+:Android on ARMv8.1+: ``X=4``, ``N=0``
+
+:Android on pre-ARMv8.1 CPUs: Not recommended due to the lack of
+  ``ARM64_HW_AFDBM``
+
+:Laptops running Chrome on x86_64: ``X=7``, ``N=2``
+
+:Working set estimation: Write ``+ memcg_id node_id gen [swappiness]``
+  to ``/sys/kernel/debug/lru_gen`` to account referenced pages to
+  generation ``max_gen`` and create the next generation ``max_gen+1``.
+  ``gen`` must be equal to ``max_gen`` in order to avoid races. A swap
+  file and a non-zero swappiness value are required to scan anon
+  pages. If swapping is not desired, set ``vm.swappiness`` to ``0``
+  and overwrite it with a non-zero ``swappiness``.
+
+:Proactive reclaim: Write ``- memcg_id node_id gen [swappiness]
+  [nr_to_reclaim]`` to ``/sys/kernel/debug/lru_gen`` to evict
+  generations less than or equal to ``gen``. ``gen`` must be less than
+  ``max_gen-1`` as ``max_gen`` and ``max_gen-1`` are active
+  generations and therefore protected from eviction. ``nr_to_reclaim``
+  can be used to limit the number of pages to be evicted. Multiple
+  command lines are supported, as is concatenation with the delimiters
+  ``,`` and ``;``.
+
+Workflow
+========
+Evictable pages are divided into multiple generations for each
+``lruvec``. The youngest generation number is stored in ``max_seq``
+for both anon and file types as they are aged on an equal footing. The
+oldest generation numbers are stored in ``min_seq[2]`` separately for
+anon and file types as clean file pages can be evicted regardless of
+swap and write-back constraints.
+Generation numbers are truncated into ``ilog2(CONFIG_NR_LRU_GENS)+1``
+bits in order to fit into ``page->flags``. The sliding window
+technique is used to prevent truncated generation numbers from
+overlapping. Each truncated generation number is an index to an array
+of per-type and per-zone lists. Evictable pages are added to the
+per-zone lists indexed by ``max_seq`` or ``min_seq[2]`` (modulo
+``CONFIG_NR_LRU_GENS``), depending on whether they are being faulted
+in or read ahead. The workflow comprises two conceptually independent
+functions: the aging and the eviction.
+
+Aging
+-----
+The aging produces young generations. Given an ``lruvec``, the aging
+scans page tables for referenced pages of this ``lruvec``. Upon
+finding one, the aging updates its generation number to ``max_seq``.
+After each round of scan, the aging increments ``max_seq``. The aging
+maintains either a system-wide ``mm_struct`` list or per-memcg
+``mm_struct`` lists, and it only scans page tables of processes that
+have been scheduled since the last scan. Since scans are differential
+with respect to referenced pages, the cost is roughly proportional to
+their number.
+
+Eviction
+--------
+The eviction consumes old generations. Given an ``lruvec``, the
+eviction scans the pages on the per-zone lists indexed by either of
+``min_seq[2]``. It selects a type according to the values of
+``min_seq[2]`` and swappiness. During a scan, the eviction either
+sorts or isolates a page, depending on whether the aging has updated
+its generation number. When it finds all the per-zone lists are empty,
+the eviction increments ``min_seq[2]`` indexed by the selected type.
+The eviction triggers the aging when both of ``min_seq[2]`` reach
+``max_seq-1``, assuming both anon and file types are reclaimable.
+
+Rationale
+=========
+Characteristics of cloud workloads
+----------------------------------
+With cloud storage gone mainstream, the role of local storage has
+diminished. For most systems running cloud workloads, anon pages
+account for the majority of memory consumption and the page cache
+contains mostly executable pages. Notably, the unmapped portion of the
+page cache is negligible.
+
+As a result, swapping is necessary to achieve substantial memory
+overcommit. And the ``rmap`` is the hottest part of the reclaim path
+because its usage is proportional to the number of scanned pages,
+which on average is many times the number of reclaimed pages.
+
+With ``zram``, a typical ``kswapd`` profile on v5.11 looks like:
+
+::
+
+  31.03% page_vma_mapped_walk
+  25.59% lzo1x_1_do_compress
+   4.63% do_raw_spin_lock
+   3.89% vma_interval_tree_iter_next
+   3.33% vma_interval_tree_subtree_search
+
+And with real swap, it looks like:
+
+::
+
+  45.16% page_vma_mapped_walk
+   7.61% do_raw_spin_lock
+   5.69% vma_interval_tree_iter_next
+   4.91% vma_interval_tree_subtree_search
+   3.71% page_referenced_one
+
+Limitations of the Current Implementation
+------------------------------------------
+Notion of the Active/Inactive
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+For servers equipped with hundreds of gigabytes of memory, the
+granularity of the active/inactive is too coarse to be useful for job
+scheduling, and false active/inactive rates are relatively high.
+
+For phones and laptops, the eviction is biased toward file pages
+because the selection has to resort to heuristics as direct
+comparisons between anon and file types are infeasible.
+
+For systems with multiple nodes and/or memcgs, it is impossible to
+compare ``lruvec``\s based on the notion of the active/inactive.
+
+Incremental Scans via the ``rmap``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Each incremental scan picks up where the last scan left off and stops
+after it has found a handful of unreferenced pages. For most systems
+running cloud workloads, incremental scans lose their advantage under
+sustained memory pressure due to the high ratio of scanned pages to
+reclaimed pages. On top of that, the ``rmap`` has poor memory locality
+due to its complex data structures. The combined effects typically
+result in a high amount of CPU usage in the reclaim path.
+
+Benefits of the Multigenerational LRU
+--------------------------------------
+Notion of Generation Numbers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The notion of generation numbers introduces a quantitative approach to
+memory overcommit. A larger number of pages can be spread out across
+configurable generations, and thus they have relatively low false
+active/inactive rates. Each generation includes all pages that have
+been referenced since the last generation.
+
+Given an ``lruvec``, scans and the selections between anon and file
+types are all based on generation numbers, which are simple and yet
+effective. For different ``lruvec``\s, comparisons are still possible
+based on the birth times of generations.
+
+Differential Scans via Page Tables
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Each differential scan discovers all pages that have been referenced
+since the last scan. Specifically, it walks the ``mm_struct`` list
+associated with an ``lruvec`` to scan the page tables of processes
+that have been scheduled since the last scan. The cost of each
+differential scan is roughly proportional to the number of referenced
+pages it discovers. Unless address spaces are extremely sparse, page
+tables usually have better memory locality than the ``rmap``. The end
+result is generally a significant reduction in CPU usage for most
+systems running cloud workloads.
+
+To-do List
+==========
+KVM Optimization
+----------------
+Support shadow page table walk.
+
+NUMA Optimization
+-----------------
+Add per-node RSS for ``should_skip_mm()``.
+
+Refault Tracking Optimization
+-----------------------------
+Use generation numbers rather than LRU positions in
+``workingset_eviction()`` and ``workingset_refault()``.
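The sliding-window indexing that the Workflow section above describes
can be illustrated with a short, self-contained user-space sketch. It
is not kernel code: the constant and helper below only mirror the idea
that a monotonically growing sequence number is reduced modulo the
number of generations before it is used as a list index.

  /*
   * Illustrative sketch of generation-number truncation: MAX_NR_GENS
   * stands in for CONFIG_NR_LRU_GENS, and gen_from_seq() mirrors the
   * idea behind the patch's lru_gen_from_seq() without copying it.
   */
  #include <stdio.h>

  #define MAX_NR_GENS 4   /* assumption: CONFIG_NR_LRU_GENS=4 */

  static int gen_from_seq(unsigned long seq)
  {
          return seq % MAX_NR_GENS;   /* index into the per-zone lists */
  }

  int main(void)
  {
          unsigned long min_seq = 5, max_seq = 8;

          /*
           * Generations min_seq..max_seq coexist without their indices
           * clashing as long as max_seq - min_seq < MAX_NR_GENS.
           */
          for (unsigned long seq = min_seq; seq <= max_seq; seq++)
                  printf("seq %lu -> list index %d\n", seq, gen_from_seq(seq));

          return 0;
  }

Keeping ``max_seq - min_seq`` below the number of generations is what
the sliding window amounts to: old and young generations never collide
in the truncated index space.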