From patchwork Wed Mar 25 06:17:16 2015
From: Johannes Weiner
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Linus Torvalds, Andrew Morton, Tetsuo Handa, Huang Ying, Andrea Arcangeli, Dave Chinner, Michal Hocko, Theodore Ts'o
Subject: [patch 12/12] mm: page_alloc: do not lock up low-order allocations upon OOM
Date: Wed, 25 Mar 2015 02:17:16 -0400
Message-Id: <1427264236-17249-13-git-send-email-hannes@cmpxchg.org>
In-Reply-To: <1427264236-17249-1-git-send-email-hannes@cmpxchg.org>
References: <1427264236-17249-1-git-send-email-hannes@cmpxchg.org>
List-ID: linux-fsdevel@vger.kernel.org
When both page reclaim and the OOM killer fail to free memory, there are
no more options for the allocator to make progress on its own. Don't
risk hanging these allocations. Leave it to the allocation site to
implement the fallback policy for failing allocations.

Signed-off-by: Johannes Weiner
---
 mm/page_alloc.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9e45e97aa934..f2b1a17416c4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2331,12 +2331,10 @@ void warn_alloc_failed(gfp_t gfp_mask, int order, const char *fmt, ...)
 
 static inline struct page *
 __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order, int alloc_flags,
-	const struct alloc_context *ac, unsigned long *did_some_progress)
+	const struct alloc_context *ac)
 {
 	struct page *page = NULL;
 
-	*did_some_progress = 0;
-
 	/*
 	 * This allocating task can become the OOM victim itself at
 	 * any point before acquiring the lock. In that case, exit
@@ -2376,13 +2374,9 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 		goto out;
 	}
 
-	if (out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false)) {
-		*did_some_progress = 1;
-	} else {
+	if (!out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false))
 		/* Oops, these shouldn't happen with the OOM killer disabled */
-		if (WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL))
-			*did_some_progress = 1;
-	}
+		WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL);
 
 	/*
 	 * Allocate from the OOM killer reserves.
@@ -2799,13 +2793,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	}
 
 	/* Reclaim has failed us, start killing things */
-	page = __alloc_pages_may_oom(gfp_mask, order, alloc_flags, ac,
-				     &did_some_progress);
+	page = __alloc_pages_may_oom(gfp_mask, order, alloc_flags, ac);
 	if (page)
 		goto got_pg;
 
-	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	/* Wait for user to order more dimms, cuz these are done */
+	if (gfp_mask & __GFP_NOFAIL)
 		goto retry;
 
 noretry: