From patchwork Fri Jun 28 15:24:19 2019
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 11022655
Date: Fri, 28 Jun 2019 08:24:19 -0700
Message-Id: <20190628152421.198994-1-shakeelb@google.com>
Subject: [PATCH v4 1/3] mm, oom: refactor dump_tasks for memcg OOMs
From: Shakeel Butt
To: Johannes Weiner, Michal Hocko, Andrew Morton, Roman Gushchin, David Rientjes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Shakeel Butt, Tetsuo Handa, Vladimir Davydov, KOSAKI Motohiro, Paul Jackson, Nick Piggin

dump_tasks() traverses all existing processes even in the memcg OOM
context, which is not only unnecessary but also wasteful. It imposes a
long RCU critical section even from a contained context, which can be
quite disruptive.

Change dump_tasks() to be aligned with select_bad_process and use
mem_cgroup_scan_tasks to selectively traverse only processes of the
target memcg hierarchy during memcg OOM.

Signed-off-by: Shakeel Butt
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc: Johannes Weiner
Cc: Tetsuo Handa
Cc: Vladimir Davydov
Cc: David Rientjes
Cc: KOSAKI Motohiro
Cc: Paul Jackson
Cc: Nick Piggin
Cc: Andrew Morton
---
Changelog since v3:
- None

Changelog since v2:
- Updated the commit message.

Changelog since v1:
- Divide the patch into two patches.

 mm/oom_kill.c | 68 ++++++++++++++++++++++++++++++---------------------
 1 file changed, 40 insertions(+), 28 deletions(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 085abc91024d..a940d2aa92d6 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -380,10 +380,38 @@ static void select_bad_process(struct oom_control *oc)
 	}
 }
 
+static int dump_task(struct task_struct *p, void *arg)
+{
+	struct oom_control *oc = arg;
+	struct task_struct *task;
+
+	if (oom_unkillable_task(p, NULL, oc->nodemask))
+		return 0;
+
+	task = find_lock_task_mm(p);
+	if (!task) {
+		/*
+		 * This is a kthread or all of p's threads have already
+		 * detached their mm's. There's no need to report
+		 * them; they can't be oom killed anyway.
+		 */
+		return 0;
+	}
+
+	pr_info("[%7d] %5d %5d %8lu %8lu %8ld %8lu %5hd %s\n",
+		task->pid, from_kuid(&init_user_ns, task_uid(task)),
+		task->tgid, task->mm->total_vm, get_mm_rss(task->mm),
+		mm_pgtables_bytes(task->mm),
+		get_mm_counter(task->mm, MM_SWAPENTS),
+		task->signal->oom_score_adj, task->comm);
+	task_unlock(task);
+
+	return 0;
+}
+
 /**
  * dump_tasks - dump current memory state of all system tasks
- * @memcg: current's memory controller, if constrained
- * @nodemask: nodemask passed to page allocator for mempolicy ooms
+ * @oc: pointer to struct oom_control
  *
  * Dumps the current memory state of all eligible tasks.  Tasks not in the same
  * memcg, not in the same cpuset, or bound to a disjoint set of mempolicy nodes
@@ -391,37 +419,21 @@ static void select_bad_process(struct oom_control *oc)
  * State information includes task's pid, uid, tgid, vm size, rss,
  * pgtables_bytes, swapents, oom_score_adj value, and name.
  */
-static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
+static void dump_tasks(struct oom_control *oc)
 {
-	struct task_struct *p;
-	struct task_struct *task;
-
 	pr_info("Tasks state (memory values in pages):\n");
 	pr_info("[  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name\n");
-	rcu_read_lock();
-	for_each_process(p) {
-		if (oom_unkillable_task(p, memcg, nodemask))
-			continue;
 
-		task = find_lock_task_mm(p);
-		if (!task) {
-			/*
-			 * This is a kthread or all of p's threads have already
-			 * detached their mm's. There's no need to report
-			 * them; they can't be oom killed anyway.
-			 */
-			continue;
-		}
+	if (is_memcg_oom(oc))
+		mem_cgroup_scan_tasks(oc->memcg, dump_task, oc);
+	else {
+		struct task_struct *p;
 
-		pr_info("[%7d] %5d %5d %8lu %8lu %8ld %8lu %5hd %s\n",
-			task->pid, from_kuid(&init_user_ns, task_uid(task)),
-			task->tgid, task->mm->total_vm, get_mm_rss(task->mm),
-			mm_pgtables_bytes(task->mm),
-			get_mm_counter(task->mm, MM_SWAPENTS),
-			task->signal->oom_score_adj, task->comm);
-		task_unlock(task);
+		rcu_read_lock();
+		for_each_process(p)
+			dump_task(p, oc);
+		rcu_read_unlock();
 	}
-	rcu_read_unlock();
 }
 
 static void dump_oom_summary(struct oom_control *oc, struct task_struct *victim)
@@ -453,7 +465,7 @@ static void dump_header(struct oom_control *oc, struct task_struct *p)
 		dump_unreclaimable_slab();
 	}
 	if (sysctl_oom_dump_tasks)
-		dump_tasks(oc->memcg, oc->nodemask);
+		dump_tasks(oc);
 	if (p)
 		dump_oom_summary(oc, p);
 }
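
As background for the hunks above: mem_cgroup_scan_tasks() invokes the
supplied callback for every task attached to the target memcg hierarchy and
stops the walk as soon as the callback returns non-zero; dump_task() always
returns 0, so every task in the hierarchy is printed. A minimal sketch of that
callback pattern follows; count_task_with_mm() and memcg_tasks_with_mm() are
hypothetical helpers for illustration only and are not part of this series.

static int count_task_with_mm(struct task_struct *p, void *arg)
{
	unsigned int *count = arg;
	struct task_struct *t;

	/* skip kthreads and tasks whose threads have all dropped their mm */
	t = find_lock_task_mm(p);
	if (t) {
		(*count)++;
		task_unlock(t);
	}
	return 0;	/* returning non-zero would stop the walk early */
}

static unsigned int memcg_tasks_with_mm(struct mem_cgroup *memcg)
{
	unsigned int count = 0;

	/* visits only tasks charged to @memcg and its descendants */
	mem_cgroup_scan_tasks(memcg, count_task_with_mm, &count);
	return count;
}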

From patchwork Fri Jun 28 15:24:20 2019
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 11022657
Date: Fri, 28 Jun 2019 08:24:20 -0700
In-Reply-To: <20190628152421.198994-1-shakeelb@google.com>
Message-Id: <20190628152421.198994-2-shakeelb@google.com>
References: <20190628152421.198994-1-shakeelb@google.com>
Subject: [PATCH v4 2/3] mm, oom: remove redundant task_in_mem_cgroup() check
From: Shakeel Butt
To: Johannes Weiner, Michal Hocko, Andrew Morton, Roman Gushchin, David Rientjes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Shakeel Butt, Tetsuo Handa, KOSAKI Motohiro, Nick Piggin, Paul Jackson, Vladimir Davydov

oom_unkillable_task() can be called from three different contexts: global
OOM, memcg OOM and the oom_score procfs interface. At the moment
oom_unkillable_task() does a task_in_mem_cgroup() check on the given
process. Since there is no reason to perform that check for global OOM or
the oom_score procfs interface, those contexts pass a NULL memcg and skip
the check. In the memcg OOM context, oom_unkillable_task() is always called
from mem_cgroup_scan_tasks(), which already restricts the walk to the target
hierarchy, so the task_in_mem_cgroup() check is redundant and effectively
dead code. Remove it altogether.

Signed-off-by: Shakeel Butt
Signed-off-by: Tetsuo Handa
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc: David Rientjes
Cc: Johannes Weiner
Cc: KOSAKI Motohiro
Cc: Nick Piggin
Cc: Paul Jackson
Cc: Vladimir Davydov
Cc: Andrew Morton
---
Changelog since v3:
- Update commit message.

Changelog since v2:
- Further divided the patch into two patches.
- Incorporated the task_in_mem_cgroup() removal from Tetsuo.

Changelog since v1:
- Divide the patch into two patches.

 fs/proc/base.c             |  2 +-
 include/linux/memcontrol.h |  7 -------
 include/linux/oom.h        |  2 +-
 mm/memcontrol.c            | 26 --------------------------
 mm/oom_kill.c              | 19 +++++++------------
 5 files changed, 9 insertions(+), 47 deletions(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index b8d5d100ed4a..5eacce5e924a 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -532,7 +532,7 @@ static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
 	unsigned long totalpages = totalram_pages() + total_swap_pages;
 	unsigned long points = 0;
 
-	points = oom_badness(task, NULL, NULL, totalpages) *
+	points = oom_badness(task, NULL, totalpages) *
 					1000 / totalpages;
 	seq_printf(m, "%lu\n", points);
 
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 9abf31bbe53a..2cbce1fe7780 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -407,7 +407,6 @@ static inline struct lruvec *mem_cgroup_lruvec(struct pglist_data *pgdat,
 
 struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
 
-bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg);
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
 struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
@@ -896,12 +895,6 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
 	return true;
 }
 
-static inline bool task_in_mem_cgroup(struct task_struct *task,
-				      const struct mem_cgroup *memcg)
-{
-	return true;
-}
-
 static inline struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
 {
 	return NULL;
diff --git a/include/linux/oom.h b/include/linux/oom.h
index d07992009265..b75104690311 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -108,7 +108,7 @@ static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
 bool __oom_reap_task_mm(struct mm_struct *mm);
 
 extern unsigned long oom_badness(struct task_struct *p,
-		struct mem_cgroup *memcg, const nodemask_t *nodemask,
+		const nodemask_t *nodemask,
 		unsigned long totalpages);
 
 extern bool out_of_memory(struct oom_control *oc);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7532ddcf31b2..b3f67a6b6527 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1259,32 +1259,6 @@ void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
 	*lru_size += nr_pages;
 }
 
-bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg)
-{
-	struct mem_cgroup *task_memcg;
-	struct task_struct *p;
-	bool ret;
-
-	p = find_lock_task_mm(task);
-	if (p) {
-		task_memcg = get_mem_cgroup_from_mm(p->mm);
-		task_unlock(p);
-	} else {
-		/*
-		 * All threads may have already detached their mm's, but the oom
-		 * killer still needs to detect if they have already been oom
-		 * killed to prevent needlessly killing additional tasks.
-		 */
-		rcu_read_lock();
-		task_memcg = mem_cgroup_from_task(task);
-		css_get(&task_memcg->css);
-		rcu_read_unlock();
-	}
-	ret = mem_cgroup_is_descendant(task_memcg, memcg);
-	css_put(&task_memcg->css);
-	return ret;
-}
-
 /**
  * mem_cgroup_margin - calculate chargeable space of a memory cgroup
  * @memcg: the memory cgroup
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index a940d2aa92d6..eff879acc886 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -153,17 +153,13 @@ static inline bool is_memcg_oom(struct oom_control *oc)
 
 /* return true if the task is not adequate as candidate victim task. */
 static bool oom_unkillable_task(struct task_struct *p,
-		struct mem_cgroup *memcg, const nodemask_t *nodemask)
+		const nodemask_t *nodemask)
 {
 	if (is_global_init(p))
 		return true;
 	if (p->flags & PF_KTHREAD)
 		return true;
 
-	/* When mem_cgroup_out_of_memory() and p is not member of the group */
-	if (memcg && !task_in_mem_cgroup(p, memcg))
-		return true;
-
 	/* p may not have freeable memory in nodemask */
 	if (!has_intersects_mems_allowed(p, nodemask))
 		return true;
@@ -194,20 +190,19 @@ static bool is_dump_unreclaim_slabs(void)
  * oom_badness - heuristic function to determine which candidate task to kill
  * @p: task struct of which task we should calculate
  * @totalpages: total present RAM allowed for page allocation
- * @memcg: task's memory controller, if constrained
  * @nodemask: nodemask passed to page allocator for mempolicy ooms
 *
 * The heuristic for determining which task to kill is made to be as simple and
 * predictable as possible.  The goal is to return the highest value for the
 * task consuming the most memory to avoid subsequent oom failures.
 */
-unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg,
+unsigned long oom_badness(struct task_struct *p,
 			  const nodemask_t *nodemask, unsigned long totalpages)
 {
 	long points;
 	long adj;
 
-	if (oom_unkillable_task(p, memcg, nodemask))
+	if (oom_unkillable_task(p, nodemask))
 		return 0;
 
 	p = find_lock_task_mm(p);
@@ -318,7 +313,7 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
 	struct oom_control *oc = arg;
 	unsigned long points;
 
-	if (oom_unkillable_task(task, NULL, oc->nodemask))
+	if (oom_unkillable_task(task, oc->nodemask))
 		goto next;
 
 	/*
@@ -342,7 +337,7 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
 		goto select;
 	}
 
-	points = oom_badness(task, NULL, oc->nodemask, oc->totalpages);
+	points = oom_badness(task, oc->nodemask, oc->totalpages);
 	if (!points || points < oc->chosen_points)
 		goto next;
 
@@ -385,7 +380,7 @@ static int dump_task(struct task_struct *p, void *arg)
 	struct oom_control *oc = arg;
 	struct task_struct *task;
 
-	if (oom_unkillable_task(p, NULL, oc->nodemask))
+	if (oom_unkillable_task(p, oc->nodemask))
 		return 0;
 
 	task = find_lock_task_mm(p);
@@ -1083,7 +1078,7 @@ bool out_of_memory(struct oom_control *oc)
 	check_panic_on_oom(oc);
 
 	if (!is_memcg_oom(oc) && sysctl_oom_kill_allocating_task &&
-	    current->mm && !oom_unkillable_task(current, NULL, oc->nodemask) &&
+	    current->mm && !oom_unkillable_task(current, oc->nodemask) &&
 	    current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
 		get_task_struct(current);
 		oc->chosen = current;
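
The redundancy argued in the changelog is easiest to see from the call path:
a memcg OOM only ever reaches oom_unkillable_task() through
mem_cgroup_scan_tasks(), which by construction walks just the target
hierarchy. A condensed sketch of the existing victim-selection dispatch in
mm/oom_kill.c, simplified for illustration and not part of this patch:

static void select_bad_process(struct oom_control *oc)
{
	if (is_memcg_oom(oc)) {
		/* visits only tasks charged to oc->memcg and its children,
		 * so re-checking memcg membership buys nothing here */
		mem_cgroup_scan_tasks(oc->memcg, oom_evaluate_task, oc);
	} else {
		struct task_struct *p;

		rcu_read_lock();
		for_each_process(p)
			if (oom_evaluate_task(p, oc))
				break;
		rcu_read_unlock();
	}
}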

From patchwork Fri Jun 28 15:24:21 2019
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 11022659
Date: Fri, 28 Jun 2019 08:24:21 -0700
In-Reply-To: <20190628152421.198994-1-shakeelb@google.com>
Message-Id: <20190628152421.198994-3-shakeelb@google.com>
References: <20190628152421.198994-1-shakeelb@google.com>
Subject: [PATCH v4 3/3] oom: decouple mems_allowed from oom_unkillable_task
From: Shakeel Butt
To: Johannes Weiner, Michal Hocko, Andrew Morton, Roman Gushchin, David Rientjes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Shakeel Butt, syzbot+d0fc9d3c166bc5e4a94b@syzkaller.appspotmail.com, KOSAKI Motohiro, Nick Piggin, Paul Jackson, Tetsuo Handa, Vladimir Davydov

Commit ef08e3b4981a ("[PATCH] cpusets: confine oom_killer to mem_exclusive
cpuset") introduced a heuristic where a potential oom-killer victim is
skipped if the intersection of the mems_allowed of the potential victim and
of current (the process that triggered the oom) is empty, on the reasoning
that killing such a victim most probably will not help the current
allocating process. Commit 7887a3da753e ("[PATCH] oom: cpuset hint") later
changed the heuristic to merely decrease the oom_badness score of such a
potential victim, on the reasoning that the cpuset of such a process might
have changed and it may previously have allocated memory on mems the current
allocating process can allocate from. Unintentionally, 7887a3da753e also
introduced a side effect: since oom_badness is exposed to user space through
/proc/[pid]/oom_score, readers with different cpusets can read different
oom_score values for the same process.

Later, commit 6cf86ac6f36b ("oom: filter tasks not sharing the same cpuset")
fixed that side effect by moving the cpuset intersection check back into the
oom-killer context only and out of oom_badness(). However, the combination
of ab290adbaf8f ("oom: make oom_unkillable_task() helper function") and
26ebc984913b ("oom: /proc/<pid>/oom_score treat kernel thread honestly")
unintentionally brought the cpuset intersection check back into the
oom_badness() calculation. Besides doing the cpuset/mempolicy intersection
from oom_badness(), the memcg OOM context is also doing the cpuset/mempolicy
intersection, which is quite wrong and is caught by syzkaller with the
following report:

kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] PREEMPT SMP KASAN
CPU: 0 PID: 28426 Comm: syz-executor.5 Not tainted 5.2.0-rc3-next-20190607
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:__read_once_size include/linux/compiler.h:194 [inline]
RIP: 0010:has_intersects_mems_allowed mm/oom_kill.c:84 [inline]
RIP: 0010:oom_unkillable_task mm/oom_kill.c:168 [inline]
RIP: 0010:oom_unkillable_task+0x180/0x400 mm/oom_kill.c:155
Code: c1 ea 03 80 3c 02 00 0f 85 80 02 00 00 4c 8b a3 10 07 00 00 48 b8 00 00 00 00 00 fc ff df 4d 8d 74 24 10 4c 89 f2 48 c1 ea 03 <80> 3c 02 00 0f 85 67 02 00 00 49 8b 44 24 10 4c 8d a0 68 fa ff ff
RSP: 0018:ffff888000127490 EFLAGS: 00010a03
RAX: dffffc0000000000 RBX: ffff8880a4cd5438 RCX: ffffffff818dae9c
RDX: 100000000c3cc602 RSI: ffffffff818dac8d RDI: 0000000000000001
RBP: ffff8880001274d0 R08: ffff888000086180 R09: ffffed1015d26be0
R10: ffffed1015d26bdf R11: ffff8880ae935efb R12: 8000000061e63007
R13: 0000000000000000 R14: 8000000061e63017 R15: 1ffff11000024ea6
FS:  00005555561f5940(0000) GS:ffff8880ae800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000607304 CR3: 000000009237e000 CR4: 00000000001426f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
Call Trace:
 oom_evaluate_task+0x49/0x520 mm/oom_kill.c:321
 mem_cgroup_scan_tasks+0xcc/0x180 mm/memcontrol.c:1169
 select_bad_process mm/oom_kill.c:374 [inline]
 out_of_memory mm/oom_kill.c:1088 [inline]
 out_of_memory+0x6b2/0x1280 mm/oom_kill.c:1035
 mem_cgroup_out_of_memory+0x1ca/0x230 mm/memcontrol.c:1573
 mem_cgroup_oom mm/memcontrol.c:1905 [inline]
 try_charge+0xfbe/0x1480 mm/memcontrol.c:2468
 mem_cgroup_try_charge+0x24d/0x5e0 mm/memcontrol.c:6073
 mem_cgroup_try_charge_delay+0x1f/0xa0 mm/memcontrol.c:6088
 do_huge_pmd_wp_page_fallback+0x24f/0x1680 mm/huge_memory.c:1201
 do_huge_pmd_wp_page+0x7fc/0x2160 mm/huge_memory.c:1359
 wp_huge_pmd mm/memory.c:3793 [inline]
 __handle_mm_fault+0x164c/0x3eb0 mm/memory.c:4006
 handle_mm_fault+0x3b7/0xa90 mm/memory.c:4053
 do_user_addr_fault arch/x86/mm/fault.c:1455 [inline]
 __do_page_fault+0x5ef/0xda0 arch/x86/mm/fault.c:1521
 do_page_fault+0x71/0x57d arch/x86/mm/fault.c:1552
 page_fault+0x1e/0x30 arch/x86/entry/entry_64.S:1156
RIP: 0033:0x400590
Code: 06 e9 49 01 00 00 48 8b 44 24 10 48 0b 44 24 28 75 1f 48 8b 14 24 48 8b 7c 24 20 be 04 00 00 00 e8 f5 56 00 00 48 8b 74 24 08 <89> 06 e9 1e 01 00 00 48 8b 44 24 08 48 8b 14 24 be 04 00 00 00 8b
RSP: 002b:00007fff7bc49780 EFLAGS: 00010206
RAX: 0000000000000001 RBX: 0000000000760000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 000000002000cffc RDI: 0000000000000001
RBP: fffffffffffffffe R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000075 R11: 0000000000000246 R12: 0000000000760008
R13: 00000000004c55f2 R14: 0000000000000000 R15: 00007fff7bc499b0
Modules linked in:
---[ end trace a65689219582ffff ]---
RIP: 0010:__read_once_size include/linux/compiler.h:194 [inline]
RIP: 0010:has_intersects_mems_allowed mm/oom_kill.c:84 [inline]
RIP: 0010:oom_unkillable_task mm/oom_kill.c:168 [inline]
RIP: 0010:oom_unkillable_task+0x180/0x400 mm/oom_kill.c:155
Code: c1 ea 03 80 3c 02 00 0f 85 80 02 00 00 4c 8b a3 10 07 00 00 48 b8 00 00 00 00 00 fc ff df 4d 8d 74 24 10 4c 89 f2 48 c1 ea 03 <80> 3c 02 00 0f 85 67 02 00 00 49 8b 44 24 10 4c 8d a0 68 fa ff ff
RSP: 0018:ffff888000127490 EFLAGS: 00010a03
RAX: dffffc0000000000 RBX: ffff8880a4cd5438 RCX: ffffffff818dae9c
RDX: 100000000c3cc602 RSI: ffffffff818dac8d RDI: 0000000000000001
RBP: ffff8880001274d0 R08: ffff888000086180 R09: ffffed1015d26be0
R10: ffffed1015d26bdf R11: ffff8880ae935efb R12: 8000000061e63007
R13: 0000000000000000 R14: 8000000061e63017 R15: 1ffff11000024ea6
FS:  00005555561f5940(0000) GS:ffff8880ae800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2f823000 CR3: 000000009237e000 CR4: 00000000001426f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600

The fix is to decouple the cpuset/mempolicy intersection check from
oom_unkillable_task() and to make sure that check is only done in the global
OOM context.

Signed-off-by: Shakeel Butt
Reported-by: syzbot+d0fc9d3c166bc5e4a94b@syzkaller.appspotmail.com
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc: David Rientjes
Cc: Johannes Weiner
Cc: KOSAKI Motohiro
Cc: Nick Piggin
Cc: Paul Jackson
Cc: Tetsuo Handa
Cc: Vladimir Davydov
Cc: Andrew Morton
---
Changelog since v3:
- Changed function name and updated comment.

Changelog since v2:
- Further divided the patch into two patches.
- More cleaned version.

Changelog since v1:
- Divide the patch into two patches.

 fs/proc/base.c      |  3 +--
 include/linux/oom.h |  1 -
 mm/oom_kill.c       | 57 +++++++++++++++++++++++++--------------------
 3 files changed, 33 insertions(+), 28 deletions(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index 5eacce5e924a..57b7a0d75ef5 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -532,8 +532,7 @@ static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns,
 	unsigned long totalpages = totalram_pages() + total_swap_pages;
 	unsigned long points = 0;
 
-	points = oom_badness(task, NULL, totalpages) *
-					1000 / totalpages;
+	points = oom_badness(task, totalpages) * 1000 / totalpages;
 	seq_printf(m, "%lu\n", points);
 
 	return 0;
diff --git a/include/linux/oom.h b/include/linux/oom.h
index b75104690311..c696c265f019 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -108,7 +108,6 @@ static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
 bool __oom_reap_task_mm(struct mm_struct *mm);
 
 extern unsigned long oom_badness(struct task_struct *p,
-		const nodemask_t *nodemask,
 		unsigned long totalpages);
 
 extern bool out_of_memory(struct oom_control *oc);
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index eff879acc886..95872bdfec4e 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -64,21 +64,33 @@ int sysctl_oom_dump_tasks = 1;
  */
 DEFINE_MUTEX(oom_lock);
 
+static inline bool is_memcg_oom(struct oom_control *oc)
+{
+	return oc->memcg != NULL;
+}
+
 #ifdef CONFIG_NUMA
 /**
- * has_intersects_mems_allowed() - check task eligiblity for kill
+ * oom_cpuset_eligible() - check task eligiblity for kill
  * @start: task struct of which task to consider
  * @mask: nodemask passed to page allocator for mempolicy ooms
  *
  * Task eligibility is determined by whether or not a candidate task, @tsk,
  * shares the same mempolicy nodes as current if it is bound by such a policy
  * and whether or not it has the same set of allowed cpuset nodes.
+ *
+ * This function is assuming oom-killer context and 'current' has triggered
+ * the oom-killer.
  */
-static bool has_intersects_mems_allowed(struct task_struct *start,
-					const nodemask_t *mask)
+static bool oom_cpuset_eligible(struct task_struct *start,
+				struct oom_control *oc)
 {
 	struct task_struct *tsk;
 	bool ret = false;
+	const nodemask_t *mask = oc->nodemask;
+
+	if (is_memcg_oom(oc))
+		return true;
 
 	rcu_read_lock();
 	for_each_thread(start, tsk) {
@@ -105,8 +117,7 @@ static bool has_intersects_mems_allowed(struct task_struct *start,
 	return ret;
 }
 #else
-static bool has_intersects_mems_allowed(struct task_struct *tsk,
-					const nodemask_t *mask)
+static bool oom_cpuset_eligible(struct task_struct *tsk, struct oom_control *oc)
 {
 	return true;
 }
@@ -146,24 +157,13 @@ static inline bool is_sysrq_oom(struct oom_control *oc)
 	return oc->order == -1;
 }
 
-static inline bool is_memcg_oom(struct oom_control *oc)
-{
-	return oc->memcg != NULL;
-}
-
 /* return true if the task is not adequate as candidate victim task. */
-static bool oom_unkillable_task(struct task_struct *p,
-		const nodemask_t *nodemask)
+static bool oom_unkillable_task(struct task_struct *p)
 {
 	if (is_global_init(p))
 		return true;
 	if (p->flags & PF_KTHREAD)
 		return true;
-
-	/* p may not have freeable memory in nodemask */
-	if (!has_intersects_mems_allowed(p, nodemask))
-		return true;
-
 	return false;
 }
@@ -190,19 +190,17 @@
  * oom_badness - heuristic function to determine which candidate task to kill
  * @p: task struct of which task we should calculate
  * @totalpages: total present RAM allowed for page allocation
- * @nodemask: nodemask passed to page allocator for mempolicy ooms
 *
 * The heuristic for determining which task to kill is made to be as simple and
 * predictable as possible.  The goal is to return the highest value for the
 * task consuming the most memory to avoid subsequent oom failures.
 */
-unsigned long oom_badness(struct task_struct *p,
-			  const nodemask_t *nodemask, unsigned long totalpages)
+unsigned long oom_badness(struct task_struct *p, unsigned long totalpages)
 {
 	long points;
 	long adj;
 
-	if (oom_unkillable_task(p, nodemask))
+	if (oom_unkillable_task(p))
 		return 0;
 
 	p = find_lock_task_mm(p);
@@ -313,7 +311,11 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
 	struct oom_control *oc = arg;
 	unsigned long points;
 
-	if (oom_unkillable_task(task, oc->nodemask))
+	if (oom_unkillable_task(task))
+		goto next;
+
+	/* p may not have freeable memory in nodemask */
+	if (!is_memcg_oom(oc) && !oom_cpuset_eligible(task, oc))
 		goto next;
 
 	/*
@@ -337,7 +339,7 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
 		goto select;
 	}
 
-	points = oom_badness(task, oc->nodemask, oc->totalpages);
+	points = oom_badness(task, oc->totalpages);
 	if (!points || points < oc->chosen_points)
 		goto next;
 
@@ -380,7 +382,11 @@ static int dump_task(struct task_struct *p, void *arg)
 	struct oom_control *oc = arg;
 	struct task_struct *task;
 
-	if (oom_unkillable_task(p, oc->nodemask))
+	if (oom_unkillable_task(p))
+		return 0;
+
+	/* p may not have freeable memory in nodemask */
+	if (!is_memcg_oom(oc) && !oom_cpuset_eligible(p, oc))
 		return 0;
 
 	task = find_lock_task_mm(p);
@@ -1078,7 +1084,8 @@ bool out_of_memory(struct oom_control *oc)
 	check_panic_on_oom(oc);
 
 	if (!is_memcg_oom(oc) && sysctl_oom_kill_allocating_task &&
-	    current->mm && !oom_unkillable_task(current, oc->nodemask) &&
+	    current->mm && !oom_unkillable_task(current) &&
+	    oom_cpuset_eligible(current, oc) &&
 	    current->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
 		get_task_struct(current);
 		oc->chosen = current;
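
Taken together, the series leaves victim filtering in the shape sketched
below. This is a hypothetical summary helper, not code added by any of the
patches; it only restates how oom_evaluate_task() and dump_task() combine
the two checks once all three patches are applied.

/* Hypothetical helper, for illustration only: should @task be skipped
 * as an OOM victim candidate after this series? */
static bool oom_skip_task(struct task_struct *task, struct oom_control *oc)
{
	/* global init and kernel threads are never OOM-killable */
	if (oom_unkillable_task(task))
		return true;

	/* cpuset/mempolicy intersection is a global-OOM concern only;
	 * a memcg OOM already walks just the offending hierarchy */
	if (!is_memcg_oom(oc) && !oom_cpuset_eligible(task, oc))
		return true;

	return false;
}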