From patchwork Mon Oct 25 15:02:20 2021
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 12582069
From: Michal Hocko
To: linux-mm@kvack.org
Cc: Dave Chinner, Neil Brown, Andrew Morton, Christoph Hellwig,
	Uladzislau Rezki, LKML, Ilya Dryomov, Jeff Layton, Michal Hocko
Subject: [PATCH 1/4] mm/vmalloc: alloc GFP_NO{FS,IO} for vmalloc
Date: Mon, 25 Oct 2021 17:02:20 +0200
Message-Id: <20211025150223.13621-2-mhocko@kernel.org>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20211025150223.13621-1-mhocko@kernel.org>
References: <20211025150223.13621-1-mhocko@kernel.org>

From: Michal Hocko

vmalloc has historically not supported GFP_NO{FS,IO} requests because
page table allocations do not take an externally provided gfp mask and
always performed GFP_KERNEL-like allocations.

For a few years now we have had scope APIs
(memalloc_no{fs,io}_{save,restore}) to enforce the NOFS and NOIO
constraints implicitly on all allocations within the scope. The hope was
that those scopes would be defined at a higher level, where the reclaim
recursion boundary starts/stops (e.g. when a lock that is also required
during memory reclaim is taken). It seems that not all NOFS/NOIO users
have adopted this approach; instead they work around the limitation by
wrapping a single [k]vmalloc allocation in a scope API. These
workarounds serve neither a better documentation of the reclaim
recursion nor a reduction of explicit GFP_NO{FS,IO} usage, so let's just
provide the semantic they are asking for without the need for
workarounds.

Add support for GFP_NOFS and GFP_NOIO to vmalloc directly. All internal
allocations already comply with the given gfp_mask. The only current
exception is vmap_pages_range, which maps kernel page tables. Infer the
proper scope API from the given gfp mask.
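For illustration, the workaround pattern referred to above looks roughly
like the following sketch (the helper name is made up here, not taken
from any particular filesystem):

	/*
	 * Sketch of the status-quo workaround: wrap a single vmalloc in
	 * a NOFS scope because vmalloc cannot honor GFP_NOFS itself.
	 */
	#include <linux/sched/mm.h>
	#include <linux/vmalloc.h>

	static void *fs_vmalloc_nofs(unsigned long size)
	{
		/* force NOFS semantics on everything inside the scope */
		unsigned int flags = memalloc_nofs_save();
		void *ptr = __vmalloc(size, GFP_KERNEL);

		memalloc_nofs_restore(flags);
		return ptr;
	}

Such a wrapper documents nothing about where the reclaim recursion
boundary actually lies; it only papers over the missing GFP_NOFS
support in vmalloc.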
Signed-off-by: Michal Hocko
---
 mm/vmalloc.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d77830ff604c..c6cc77d2f366 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2889,6 +2889,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	unsigned long array_size;
 	unsigned int nr_small_pages = size >> PAGE_SHIFT;
 	unsigned int page_order;
+	unsigned int flags;
+	int ret;
 
 	array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
 	gfp_mask |= __GFP_NOWARN;
@@ -2930,8 +2932,24 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		goto fail;
 	}
 
-	if (vmap_pages_range(addr, addr + size, prot, area->pages,
-			page_shift) < 0) {
+	/*
+	 * page table allocations ignore the external gfp mask, enforce it
+	 * by the scope API
+	 */
+	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
+		flags = memalloc_nofs_save();
+	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
+		flags = memalloc_noio_save();
+
+	ret = vmap_pages_range(addr, addr + size, prot, area->pages,
+			page_shift);
+
+	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
+		memalloc_nofs_restore(flags);
+	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
+		memalloc_noio_restore(flags);
+
+	if (ret < 0) {
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, failed to map pages",
 			area->nr_pages * PAGE_SIZE);
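
(For illustration only, not part of the patch: with this change applied,
a caller that must not recurse into the filesystem could pass the
constrained mask directly instead of opening a scope around vmalloc,
e.g.

	/* hypothetical post-patch caller; no scope wrapper needed */
	void *buf = __vmalloc(size, GFP_NOFS);

The NOFS/NOIO scope is then entered internally, only around the page
table mapping done by vmap_pages_range, while all other internal
allocations honor the passed gfp_mask directly.)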