From patchwork Mon Jun 15 20:35:07 2020
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 11605705
Date: Mon, 15 Jun 2020 13:35:07 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Chris Murphy, Matthew Wilcox, Vlastimil Babka,
    Linux Memory Management List, Chris Wilson
Subject: [PATCH] mm: fix swap cache node allocation mask
User-Agent: Alpine 2.11 (LSU 23 2013-08-11)
MIME-Version: 1.0

https://bugzilla.kernel.org/show_bug.cgi?id=208085 reports that a slightly
overcommitted load, testing swap and zram along with i915, splats and keeps
on splatting, when it had better fail less noisily:

gnome-shell: page allocation failure: order:0,
 mode:0x400d0(__GFP_IO|__GFP_FS|__GFP_COMP|__GFP_RECLAIMABLE),
 nodemask=(null),cpuset=/,mems_allowed=0
CPU: 2 PID: 1155 Comm: gnome-shell Not tainted 5.7.0-1.fc33.x86_64 #1
Call Trace:
 dump_stack+0x64/0x88
 warn_alloc.cold+0x75/0xd9
 __alloc_pages_slowpath.constprop.0+0xcfa/0xd30
 __alloc_pages_nodemask+0x2df/0x320
 alloc_slab_page+0x195/0x310
 allocate_slab+0x3c5/0x440
 ___slab_alloc+0x40c/0x5f0
 __slab_alloc+0x1c/0x30
 kmem_cache_alloc+0x20e/0x220
 xas_nomem+0x28/0x70
 add_to_swap_cache+0x321/0x400
 __read_swap_cache_async+0x105/0x240
 swap_cluster_readahead+0x22c/0x2e0
 shmem_swapin+0x8e/0xc0
 shmem_swapin_page+0x196/0x740
 shmem_getpage_gfp+0x3a2/0xa60
 shmem_read_mapping_page_gfp+0x32/0x60
 shmem_get_pages+0x155/0x5e0 [i915]
 __i915_gem_object_get_pages+0x68/0xa0 [i915]
 i915_vma_pin+0x3fe/0x6c0 [i915]
 eb_add_vma+0x10b/0x2c0 [i915]
 i915_gem_do_execbuffer+0x704/0x3430 [i915]
 i915_gem_execbuffer2_ioctl+0x1ea/0x3e0 [i915]
 drm_ioctl_kernel+0x86/0xd0 [drm]
 drm_ioctl+0x206/0x390 [drm]
 ksys_ioctl+0x82/0xc0
 __x64_sys_ioctl+0x16/0x20
 do_syscall_64+0x5b/0xf0
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

Reported on 5.7, but it goes back
really to 3.1: when shmem_read_mapping_page_gfp() was implemented for use
by i915, and allowed for __GFP_NORETRY and __GFP_NOWARN flags in most
places, but missed swapin's "& GFP_KERNEL" mask for page tree node
allocation in __read_swap_cache_async() - that was to mask off
HIGHUSER_MOVABLE bits from what page cache uses, but GFP_RECLAIM_MASK is
now what's needed.

Fixes: 68da9f055755 ("tmpfs: pass gfp to shmem_getpage_gfp")
Reported-by: Chris Murphy
Analyzed-by: Vlastimil Babka
Analyzed-by: Matthew Wilcox
Tested-by: Chris Murphy
Signed-off-by: Hugh Dickins
Reviewed-by: Matthew Wilcox (Oracle)
Cc: stable@vger.kernel.org # 3.1+
Reviewed-by: Vlastimil Babka
---

 mm/swap_state.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- 5.8-rc1/mm/swap_state.c	2020-06-14 15:13:01.518042420 -0700
+++ linux/mm/swap_state.c	2020-06-15 11:48:02.346691901 -0700
@@ -21,7 +21,7 @@
 #include <linux/vmalloc.h>
 #include <linux/swap_slots.h>
 #include <linux/huge_mm.h>
-
+#include "internal.h"
 
 /*
  * swapper_space is a fiction, retained to simplify the path through
@@ -429,7 +429,7 @@ struct page *__read_swap_cache_async(swp
 	__SetPageSwapBacked(page);
 
 	/* May fail (-ENOMEM) if XArray node allocation failed. */
-	if (add_to_swap_cache(page, entry, gfp_mask & GFP_KERNEL)) {
+	if (add_to_swap_cache(page, entry, gfp_mask & GFP_RECLAIM_MASK)) {
 		put_swap_page(page, entry);
 		goto fail_unlock;
 	}
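
[Editorial aside, not part of Hugh's patch: a userspace sketch of the gfp
arithmetic the commit message describes. The old "& GFP_KERNEL" keeps only
the reclaim/IO/FS bits, so i915's __GFP_NORETRY and __GFP_NOWARN are
dropped before the XArray node allocation and the failure warns and retries;
"& GFP_RECLAIM_MASK" (defined in mm/internal.h, hence the added include)
keeps those modifiers while still stripping the HIGHUSER_MOVABLE zone and
mobility bits. The flag values below are simplified stand-ins for
illustration, not the kernel's real <linux/gfp.h> bit layout.]

/*
 * Illustrative sketch only: show that masking a caller's gfp with a
 * GFP_KERNEL-style mask silently drops __GFP_NORETRY/__GFP_NOWARN,
 * while a GFP_RECLAIM_MASK-style mask preserves them.
 */
#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_RECLAIM   0x001u  /* stand-in for direct + kswapd reclaim */
#define __GFP_IO        0x002u
#define __GFP_FS        0x004u
#define __GFP_HIGHMEM   0x008u  /* zone bit: must not reach slab node allocs */
#define __GFP_MOVABLE   0x010u  /* mobility bit: likewise */
#define __GFP_NOWARN    0x020u
#define __GFP_NORETRY   0x040u

#define GFP_KERNEL           (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_HIGHUSER_MOVABLE (GFP_KERNEL | __GFP_HIGHMEM | __GFP_MOVABLE)
/* simplified GFP_RECLAIM_MASK: reclaim modifiers survive, zone bits do not */
#define GFP_RECLAIM_MASK     (GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY)

int main(void)
{
	/* roughly what i915 passes down via shmem_read_mapping_page_gfp() */
	gfp_t gfp = GFP_HIGHUSER_MOVABLE | __GFP_NORETRY | __GFP_NOWARN;

	gfp_t old = gfp & GFP_KERNEL;       /* pre-patch: modifiers lost */
	gfp_t new = gfp & GFP_RECLAIM_MASK; /* patched: modifiers kept   */

	printf("old mask: NORETRY=%d NOWARN=%d\n",
	       !!(old & __GFP_NORETRY), !!(old & __GFP_NOWARN));
	printf("new mask: NORETRY=%d NOWARN=%d\n",
	       !!(new & __GFP_NORETRY), !!(new & __GFP_NOWARN));
	return 0;
}

Running the sketch prints NORETRY=0 NOWARN=0 for the old mask and
NORETRY=1 NOWARN=1 for the new one, which is exactly the behavioral change
the one-line fix makes: the i915 swapin path can now fail quietly instead
of splatting.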