From patchwork Tue Jul 10 10:19:47 2018
X-Patchwork-Submitter: 禹舟键
X-Patchwork-Id: 10516793
From: ufo19890607@gmail.com
To: akpm@linux-foundation.org, mhocko@suse.com, rientjes@google.com,
	kirill.shutemov@linux.intel.com, aarcange@redhat.com,
	penguin-kernel@i-love.sakura.ne.jp, guro@fb.com, yang.s@alibaba-inc.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, yuzhoujian@didichuxing.com
Subject: [PATCH v13 1/2] Reorganize the oom report in dump_header
Date: Tue, 10 Jul 2018 18:19:47 +0800
Message-Id: <1531217988-33940-1-git-send-email-ufo19890607@gmail.com>
X-Mailer: git-send-email 1.8.3.1

From: yuzhoujian

The OOM report contains several sections.
The first one is the allocation context that has triggered the OOM. Then we have the cpuset context, followed by the stack trace of the OOM path, the list of oom-eligible tasks, and finally the information about the chosen oom victim. One thing that makes parsing more awkward than necessary is that we do not have a single, easily parsable line describing the oom context. This patch reorganizes the oom report into:

1) who invoked the oom killer and what the allocation request was

[  126.168182] panic invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), order=0, oom_score_adj=0

2) the OOM stack trace

[  126.169806] CPU: 23 PID: 8668 Comm: panic Not tainted 4.18.0-rc4+ #44
[  126.170494] Hardware name: Inspur SA5212M4/YZMB-00370-107, BIOS 4.1.10 11/14/2016
[  126.171197] Call Trace:
[  126.171901]  dump_stack+0x5a/0x73
[  126.172593]  dump_header+0x58/0x2dc
[  126.173294]  oom_kill_process+0x228/0x420
[  126.173999]  ? oom_badness+0x2a/0x130
[  126.174705]  out_of_memory+0x11a/0x4a0
[  126.175415]  __alloc_pages_slowpath+0x7cc/0xa1e
[  126.176128]  ? __alloc_pages_slowpath+0x194/0xa1e
[  126.176853]  ? page_counter_try_charge+0x54/0xc0
[  126.177580]  __alloc_pages_nodemask+0x277/0x290
[  126.178319]  alloc_pages_vma+0x73/0x180
[  126.179058]  do_anonymous_page+0xed/0x5a0
[  126.179825]  __handle_mm_fault+0xbb3/0xe70
[  126.180566]  handle_mm_fault+0xfa/0x210
[  126.181313]  __do_page_fault+0x233/0x4c0
[  126.182063]  do_page_fault+0x32/0x140
[  126.182812]  ? page_fault+0x8/0x30
[  126.183560]  page_fault+0x1e/0x30

3) the oom context (constraint and the chosen victim)

[  126.190619] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0-1,task=panic,pid=10235,uid=0

An admin can now get the full oom context from a single line, which makes parsing much easier (a minimal stand-alone parsing sketch follows the diff below).

Signed-off-by: yuzhoujian
Acked-by: Michal Hocko
---
Changes since v12:
- print the cpuset and memory allocation information after the oom victim's comm and pid.

Changes since v11:
- move the array of const char oom_constraint_text to oom_kill.c.
- add the cpuset information to the one-line output.

Changes since v10:
- divide the patch v8 into two parts. One part adds the array of const char and puts enum oom_constraint into oom.h; the other adds a new function to print the missing information for the system-wide oom report.

Changes since v9:
- divide the patch v8 into two parts. One part moves enum oom_constraint into memcontrol.h; the other refactors the output info in dump_header.
- replace origin_memcg and kill_memcg with oom_memcg and task_memcg respectively.

Changes since v8:
- add the constraint to the oom_control structure.
- put enum oom_constraint and the constraint array into the oom.h file.
- simplify the description of mem_cgroup_print_oom_context.

Changes since v7:
- add the constraint parameter to dump_header and oom_kill_process.
- remove the static char array in mem_cgroup_print_oom_context and invoke pr_cont_cgroup_path to print the memcg's name.
- combine the patchset v6 into one.

Changes since v6:
- divide the patch v5 into two parts. One part adds an array of const char and puts enum oom_constraint into memcontrol.h; the other refactors the output in dump_header.
- limit the memory usage of the static char array by using NAME_MAX in mem_cgroup_print_oom_context.
- eliminate the spurious spaces in the oom output and fix the spelling of "constraint".

Changes since v5:
- add an array of const char for each constraint.
- replace all of the pr_cont calls with a single pr_info line.
- put enum oom_constraint into the memcontrol.c file for printing the oom constraint.

Changes since v4:
- rename the helper to mem_cgroup_print_oom_context.
- rename mem_cgroup_print_oom_info to mem_cgroup_print_oom_meminfo.
- add the constraint info to dump_header.

Changes since v3:
- rename the helper to mem_cgroup_print_oom_memcg_name.
- hold the rcu lock in the helper.
- remove the printing of the memcg's name from mem_cgroup_print_oom_info.

Changes since v2:
- add the mem_cgroup_print_memcg_name helper to print the name of the memcg that contains the task to be killed by the oom-killer.

Changes since v1:
- replace adding mem_cgroup_print_oom_info with printing the memcg's name only.

 include/linux/oom.h    | 10 ++++++++++
 kernel/cgroup/cpuset.c |  4 ++--
 mm/oom_kill.c          | 37 +++++++++++++++++++++----------------
 mm/page_alloc.c        |  4 ++--
 4 files changed, 35 insertions(+), 20 deletions(-)

diff --git a/include/linux/oom.h b/include/linux/oom.h
index 6adac113e96d..3e5e01619bc8 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -15,6 +15,13 @@ struct notifier_block;
 struct mem_cgroup;
 struct task_struct;
 
+enum oom_constraint {
+	CONSTRAINT_NONE,
+	CONSTRAINT_CPUSET,
+	CONSTRAINT_MEMORY_POLICY,
+	CONSTRAINT_MEMCG,
+};
+
 /*
  * Details of the page allocation that triggered the oom killer that are used to
  * determine what should be killed.
@@ -42,6 +49,9 @@ struct oom_control {
 	unsigned long totalpages;
 	struct task_struct *chosen;
 	unsigned long chosen_points;
+
+	/* Used to print the constraint info. */
+	enum oom_constraint constraint;
 };
 
 extern struct mutex oom_lock;
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 266f10cb7222..9510a5b32eaf 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2666,9 +2666,9 @@ void cpuset_print_current_mems_allowed(void)
 	rcu_read_lock();
 
 	cgrp = task_cs(current)->css.cgroup;
-	pr_info("%s cpuset=", current->comm);
+	pr_cont(",cpuset=");
 	pr_cont_cgroup_name(cgrp);
-	pr_cont(" mems_allowed=%*pbl\n",
+	pr_cont(",mems_allowed=%*pbl",
 		nodemask_pr_args(&current->mems_allowed));
 
 	rcu_read_unlock();
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 84081e77bc51..531b2c86d4db 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -237,11 +237,11 @@ unsigned long oom_badness(struct task_struct *p, struct mem_cgroup *memcg,
 	return points > 0 ? points : 1;
 }
 
-enum oom_constraint {
-	CONSTRAINT_NONE,
-	CONSTRAINT_CPUSET,
-	CONSTRAINT_MEMORY_POLICY,
-	CONSTRAINT_MEMCG,
+static const char * const oom_constraint_text[] = {
+	[CONSTRAINT_NONE] = "CONSTRAINT_NONE",
+	[CONSTRAINT_CPUSET] = "CONSTRAINT_CPUSET",
+	[CONSTRAINT_MEMORY_POLICY] = "CONSTRAINT_MEMORY_POLICY",
+	[CONSTRAINT_MEMCG] = "CONSTRAINT_MEMCG",
 };
 
 /*
@@ -421,15 +421,21 @@ static void dump_tasks(struct mem_cgroup *memcg, const nodemask_t *nodemask)
 
 static void dump_header(struct oom_control *oc, struct task_struct *p)
 {
-	pr_warn("%s invoked oom-killer: gfp_mask=%#x(%pGg), nodemask=%*pbl, order=%d, oom_score_adj=%hd\n",
-		current->comm, oc->gfp_mask, &oc->gfp_mask,
-		nodemask_pr_args(oc->nodemask), oc->order,
+	pr_warn("%s invoked oom-killer: gfp_mask=%#x(%pGg), order=%d, oom_score_adj=%hd\n",
+		current->comm, oc->gfp_mask, &oc->gfp_mask, oc->order,
 		current->signal->oom_score_adj);
 	if (!IS_ENABLED(CONFIG_COMPACTION) && oc->order)
 		pr_warn("COMPACTION is disabled!!!\n");
 
-	cpuset_print_current_mems_allowed();
 	dump_stack();
+
+	/* one line summary of the oom killer context. */
+	pr_info("oom-kill:constraint=%s,nodemask=%*pbl",
+		oom_constraint_text[oc->constraint],
+		nodemask_pr_args(oc->nodemask));
+	cpuset_print_current_mems_allowed();
+	pr_cont(",task=%s,pid=%5d,uid=%5d\n", p->comm, p->pid,
+		from_kuid(&init_user_ns, task_uid(p)));
 	if (is_memcg_oom(oc))
 		mem_cgroup_print_oom_info(oc->memcg, p);
 	else {
@@ -973,8 +979,7 @@ static void oom_kill_process(struct oom_control *oc, const char *message)
 /*
  * Determines whether the kernel must panic because of the panic_on_oom sysctl.
  */
-static void check_panic_on_oom(struct oom_control *oc,
-			       enum oom_constraint constraint)
+static void check_panic_on_oom(struct oom_control *oc)
 {
 	if (likely(!sysctl_panic_on_oom))
 		return;
@@ -984,7 +989,7 @@
 	 * does not panic for cpuset, mempolicy, or memcg allocation
 	 * failures.
 	 */
-	if (constraint != CONSTRAINT_NONE)
+	if (oc->constraint != CONSTRAINT_NONE)
 		return;
 	}
 	/* Do not panic for oom kills triggered by sysrq */
@@ -1021,8 +1026,8 @@ EXPORT_SYMBOL_GPL(unregister_oom_notifier);
 bool out_of_memory(struct oom_control *oc)
 {
 	unsigned long freed = 0;
-	enum oom_constraint constraint = CONSTRAINT_NONE;
 
+	oc->constraint = CONSTRAINT_NONE;
 	if (oom_killer_disabled)
 		return false;
 
@@ -1057,10 +1062,10 @@ bool out_of_memory(struct oom_control *oc)
 	 * Check if there were limitations on the allocation (only relevant for
 	 * NUMA and memcg) that may require different handling.
 	 */
-	constraint = constrained_alloc(oc);
-	if (constraint != CONSTRAINT_MEMORY_POLICY)
+	oc->constraint = constrained_alloc(oc);
+	if (oc->constraint != CONSTRAINT_MEMORY_POLICY)
 		oc->nodemask = NULL;
-	check_panic_on_oom(oc, constraint);
+	check_panic_on_oom(oc);
 
 	if (!is_memcg_oom(oc) && sysctl_oom_kill_allocating_task &&
 	    current->mm && !oom_unkillable_task(current, NULL, oc->nodemask) &&
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1521100f1e63..194e0763fd5f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3416,13 +3416,13 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
 	va_start(args, fmt);
 	vaf.fmt = fmt;
 	vaf.va = &args;
-	pr_warn("%s: %pV, mode:%#x(%pGg), nodemask=%*pbl\n",
+	pr_warn("%s: %pV,mode:%#x(%pGg),nodemask=%*pbl",
 			current->comm, &vaf, gfp_mask, &gfp_mask,
 			nodemask_pr_args(nodemask));
 	va_end(args);
 
 	cpuset_print_current_mems_allowed();
-
+	pr_cont("\n");
 	dump_stack();
 	warn_alloc_show_mem(gfp_mask, nodemask);
 }
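
A note on the "makes parsing much easier" point in the changelog: the one-line oom-kill record is meant to be consumed by scripts. Below is a minimal, hypothetical userspace sketch (not part of the patch) that parses such a line with sscanf(); the field layout follows the example output quoted above, and the hard-coded input string is only for illustration.

/* parse_oom_line.c - hypothetical demo, not part of this patch. */
#include <stdio.h>

int main(void)
{
	/* Example record in the format emitted by dump_header() above. */
	const char *line =
		"oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),"
		"cpuset=/,mems_allowed=0-1,task=panic,pid=10235,uid=0";
	char constraint[32], nodemask[32], cpuset[64], mems[32], task[32];
	int pid, uid;

	if (sscanf(line,
		   "oom-kill:constraint=%31[^,],nodemask=%31[^,],"
		   "cpuset=%63[^,],mems_allowed=%31[^,],"
		   "task=%31[^,],pid=%d,uid=%d",
		   constraint, nodemask, cpuset, mems, task,
		   &pid, &uid) != 7) {
		fprintf(stderr, "unexpected oom-kill line format\n");
		return 1;
	}

	printf("constraint=%s victim=%s pid=%d uid=%d cpuset=%s mems=%s nodemask=%s\n",
	       constraint, task, pid, uid, cpuset, mems, nodemask);
	return 0;
}

In real use the record would be pulled from dmesg or /dev/kmsg, so a parser would first locate the "oom-kill:" prefix after the timestamp; the %5d padding of pid/uid in the kernel output is absorbed by %d.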