From patchwork Mon Oct 18 11:47:10 2021
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 12566107
From: Michal Hocko
Cc: Dave Chinner, Neil Brown, Andrew Morton, Christoph Hellwig,
    Uladzislau Rezki, LKML, Ilya Dryomov, Jeff Layton, Michal Hocko
Subject: [RFC 1/3] mm/vmalloc: alloc GFP_NO{FS,IO} for vmalloc
Date: Mon, 18 Oct 2021 13:47:10 +0200
Message-Id: <20211018114712.9802-2-mhocko@kernel.org>
In-Reply-To: <20211018114712.9802-1-mhocko@kernel.org>
References: <20211018114712.9802-1-mhocko@kernel.org>

From: Michal Hocko

vmalloc historically hasn't supported GFP_NO{FS,IO} requests because
page table allocations do not accept an externally provided gfp mask
and are performed as GFP_KERNEL allocations. For a few years now we
have had the scope APIs (memalloc_no{fs,io}_{save,restore}) to enforce
the NOFS and NOIO constraints implicitly on all allocations within the
scope. The hope was that those scopes would be defined at a higher
level, where the reclaim recursion boundary starts/stops (e.g. when a
lock required during memory reclaim is taken). It seems that not all
NOFS/NOIO users have adopted this approach; instead they work around
the limitation by wrapping a single [k]vmalloc allocation in a scope
API call. These workarounds neither document the reclaim recursion
boundaries better nor reduce explicit GFP_NO{FS,IO} usage, so let's
just provide the semantic they are asking for without requiring
workarounds.

Add support for GFP_NOFS and GFP_NOIO to vmalloc directly. All internal
allocations already comply with the given gfp_mask. The only current
exception is vmap_pages_range, which maps the pages into kernel page
tables. Infer the proper scope API from the given gfp mask.
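To illustrate the intended effect, here is a minimal sketch of a caller
before and after this change; the alloc_meta_buffer* helpers are
hypothetical and only stand in for the scope-API workaround described
above, they are not part of the patch:

#include <linux/vmalloc.h>
#include <linux/sched/mm.h>

/* Before: wrap a GFP_KERNEL vmalloc in a NOFS scope only because the
 * internal page table allocations would otherwise use GFP_KERNEL. */
static void *alloc_meta_buffer_workaround(unsigned long size)
{
        unsigned int nofs_flag = memalloc_nofs_save();
        void *buf = __vmalloc(size, GFP_KERNEL);

        memalloc_nofs_restore(nofs_flag);
        return buf;
}

/* After this patch: pass GFP_NOFS directly; __vmalloc_area_node() now
 * applies the matching scope around vmap_pages_range() internally. */
static void *alloc_meta_buffer(unsigned long size)
{
        return __vmalloc(size, GFP_NOFS);
}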
Signed-off-by: Michal Hocko
---
 mm/vmalloc.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d77830ff604c..7455c89598d3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2889,6 +2889,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	unsigned long array_size;
 	unsigned int nr_small_pages = size >> PAGE_SHIFT;
 	unsigned int page_order;
+	unsigned int flags;
+	int ret;
 
 	array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
 	gfp_mask |= __GFP_NOWARN;
@@ -2930,8 +2932,24 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		goto fail;
 	}
 
-	if (vmap_pages_range(addr, addr + size, prot, area->pages,
-			page_shift) < 0) {
+	/*
+	 * page tables allocations ignore external gfp mask, enforce it
+	 * by the scope API
+	 */
+	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
+		flags = memalloc_nofs_save();
+	else if (!(gfp_mask & (__GFP_FS | __GFP_IO)))
+		flags = memalloc_noio_save();
+
+	ret = vmap_pages_range(addr, addr + size, prot, area->pages,
+			page_shift);
+
+	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
+		memalloc_nofs_restore(flags);
+	else if (!(gfp_mask & (__GFP_FS | __GFP_IO)))
+		memalloc_noio_restore(flags);
+
+	if (ret < 0) {
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, failed to map pages",
 			area->nr_pages * PAGE_SIZE);

From patchwork Mon Oct 18 11:47:11 2021
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 12566113
From: Michal Hocko
Cc: Dave Chinner, Neil Brown, Andrew Morton, Christoph Hellwig,
    Uladzislau Rezki, LKML, Ilya Dryomov, Jeff Layton, Michal Hocko
Subject: [RFC 2/3] mm/vmalloc: add support for __GFP_NOFAIL
Date: Mon, 18 Oct 2021 13:47:11 +0200
Message-Id: <20211018114712.9802-3-mhocko@kernel.org>
In-Reply-To: <20211018114712.9802-1-mhocko@kernel.org>
References: <20211018114712.9802-1-mhocko@kernel.org>

From: Michal Hocko

Dave Chinner has mentioned that some of the xfs code would benefit from
kvmalloc support for __GFP_NOFAIL because xfs has allocations that must
not fail and do not fit into a single page. The large part of the
vmalloc implementation already complies with the given gfp flags, so
there is nothing to be done for those paths. The area and page table
allocations are the exception. Implement a retry loop for those.
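As a usage sketch (the caller and function name are hypothetical, not
taken from the patch), an xfs-like path needing a multi-page allocation
that must not fail could then simply do:

#include <linux/vmalloc.h>

/* Hypothetical: a buffer larger than a page that is not allowed to
 * fail. With this patch the vm area setup and vmap_pages_range() are
 * retried until they succeed, so the call is expected to only return
 * a valid pointer. */
static void *alloc_recovery_buffer(unsigned long size)
{
        return __vmalloc(size, GFP_KERNEL | __GFP_NOFAIL);
}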
Signed-off-by: Michal Hocko
---
 mm/vmalloc.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 7455c89598d3..3a5a178295d1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2941,8 +2941,10 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	else if (!(gfp_mask & (__GFP_FS | __GFP_IO)))
 		flags = memalloc_noio_save();
 
-	ret = vmap_pages_range(addr, addr + size, prot, area->pages,
+	do {
+		ret = vmap_pages_range(addr, addr + size, prot, area->pages,
 			page_shift);
+	} while ((gfp_mask & __GFP_NOFAIL) && (ret < 0));
 
 	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
 		memalloc_nofs_restore(flags);
@@ -3032,6 +3034,8 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, vm_struct allocation failed",
 			real_size);
+		if (gfp_mask & __GFP_NOFAIL)
+			goto again;
 		goto fail;
 	}
 

From patchwork Mon Oct 18 11:47:12 2021
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 12566111
From: Michal Hocko
Cc: Dave Chinner, Neil Brown, Andrew Morton, Christoph Hellwig,
    Uladzislau Rezki, LKML, Ilya Dryomov, Jeff Layton, Michal Hocko
Subject: [RFC 3/3] mm: allow !GFP_KERNEL allocations for kvmalloc
Date: Mon, 18 Oct 2021 13:47:12 +0200
Message-Id: <20211018114712.9802-4-mhocko@kernel.org>
In-Reply-To: <20211018114712.9802-1-mhocko@kernel.org>
References: <20211018114712.9802-1-mhocko@kernel.org>

From: Michal Hocko

Support for GFP_NO{FS,IO} and __GFP_NOFAIL has been implemented by the
previous patches, so we can allow those flags for kvmalloc as well.
This will let some external users simplify or completely remove their
helpers.

The GFP_NOWAIT semantic has never been supported, but that has not been
explicitly documented so far, so add a note about it.

ceph_kvmalloc is the first helper to be dropped; its callers are
changed to plain kvmalloc.

Signed-off-by: Michal Hocko
---
 include/linux/ceph/libceph.h |  1 -
 mm/util.c                    | 15 ++++-----------
 net/ceph/buffer.c            |  4 ++--
 net/ceph/ceph_common.c       | 27 ---------------------------
 net/ceph/crypto.c            |  2 +-
 net/ceph/messenger.c         |  2 +-
 net/ceph/messenger_v2.c      |  2 +-
 net/ceph/osdmap.c            | 12 ++++++------
 8 files changed, 15 insertions(+), 50 deletions(-)

diff --git a/include/linux/ceph/libceph.h b/include/linux/ceph/libceph.h
index 409d8c29bc4f..309acbcb5a8a 100644
--- a/include/linux/ceph/libceph.h
+++ b/include/linux/ceph/libceph.h
@@ -295,7 +295,6 @@ extern bool libceph_compatible(void *data);
 
 extern const char *ceph_msg_type_name(int type);
 extern int ceph_check_fsid(struct ceph_client *client, struct ceph_fsid *fsid);
-extern void *ceph_kvmalloc(size_t size, gfp_t flags);
 
 struct fs_parameter;
 struct fc_log;
diff --git a/mm/util.c b/mm/util.c
index bacabe446906..fdec6b4b1267 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -549,13 +549,10 @@ EXPORT_SYMBOL(vm_mmap);
  * Uses kmalloc to get the memory but if the allocation fails then falls back
  * to the vmalloc allocator. Use kvfree for freeing the memory.
  *
- * Reclaim modifiers - __GFP_NORETRY and __GFP_NOFAIL are not supported.
+ * Reclaim modifiers - __GFP_NORETRY and GFP_NOWAIT are not supported.
  * __GFP_RETRY_MAYFAIL is supported, and it should be used only if kmalloc is
  * preferable to the vmalloc fallback, due to visible performance drawbacks.
  *
- * Please note that any use of gfp flags outside of GFP_KERNEL is careful to not
- * fall back to vmalloc.
- *
  * Return: pointer to the allocated memory of %NULL in case of failure
  */
 void *kvmalloc_node(size_t size, gfp_t flags, int node)
@@ -563,13 +560,6 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 	gfp_t kmalloc_flags = flags;
 	void *ret;
 
-	/*
-	 * vmalloc uses GFP_KERNEL for some internal allocations (e.g page tables)
-	 * so the given set of flags has to be compatible.
-	 */
-	if ((flags & GFP_KERNEL) != GFP_KERNEL)
-		return kmalloc_node(size, flags, node);
-
 	/*
 	 * We want to attempt a large physically contiguous block first because
 	 * it is less likely to fragment multiple larger blocks and therefore
@@ -582,6 +572,9 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 
 		if (!(kmalloc_flags & __GFP_RETRY_MAYFAIL))
 			kmalloc_flags |= __GFP_NORETRY;
+
+		/* nofail semantic is implemented by the vmalloc fallback */
+		kmalloc_flags &= ~__GFP_NOFAIL;
 	}
 
 	ret = kmalloc_node(size, kmalloc_flags, node);
diff --git a/net/ceph/buffer.c b/net/ceph/buffer.c
index 5622763ad402..7e51f128045d 100644
--- a/net/ceph/buffer.c
+++ b/net/ceph/buffer.c
@@ -7,7 +7,7 @@
 
 #include
 #include
-#include	/* for ceph_kvmalloc */
+#include	/* for kvmalloc */
 
 struct ceph_buffer *ceph_buffer_new(size_t len, gfp_t gfp)
 {
@@ -17,7 +17,7 @@ struct ceph_buffer *ceph_buffer_new(size_t len, gfp_t gfp)
 	if (!b)
 		return NULL;
 
-	b->vec.iov_base = ceph_kvmalloc(len, gfp);
+	b->vec.iov_base = kvmalloc(len, gfp);
 	if (!b->vec.iov_base) {
 		kfree(b);
 		return NULL;
diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
index 97d6ea763e32..9441b4a4912b 100644
--- a/net/ceph/ceph_common.c
+++ b/net/ceph/ceph_common.c
@@ -190,33 +190,6 @@ int ceph_compare_options(struct ceph_options *new_opt,
 }
 EXPORT_SYMBOL(ceph_compare_options);
 
-/*
- * kvmalloc() doesn't fall back to the vmalloc allocator unless flags are
- * compatible with (a superset of) GFP_KERNEL. This is because while the
- * actual pages are allocated with the specified flags, the page table pages
- * are always allocated with GFP_KERNEL.
- *
- * ceph_kvmalloc() may be called with GFP_KERNEL, GFP_NOFS or GFP_NOIO.
- */
-void *ceph_kvmalloc(size_t size, gfp_t flags)
-{
-	void *p;
-
-	if ((flags & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS)) {
-		p = kvmalloc(size, flags);
-	} else if ((flags & (__GFP_IO | __GFP_FS)) == __GFP_IO) {
-		unsigned int nofs_flag = memalloc_nofs_save();
-		p = kvmalloc(size, GFP_KERNEL);
-		memalloc_nofs_restore(nofs_flag);
-	} else {
-		unsigned int noio_flag = memalloc_noio_save();
-		p = kvmalloc(size, GFP_KERNEL);
-		memalloc_noio_restore(noio_flag);
-	}
-
-	return p;
-}
-
 static int parse_fsid(const char *str, struct ceph_fsid *fsid)
 {
 	int i = 0;
diff --git a/net/ceph/crypto.c b/net/ceph/crypto.c
index 92d89b331645..051d22c0e4ad 100644
--- a/net/ceph/crypto.c
+++ b/net/ceph/crypto.c
@@ -147,7 +147,7 @@ void ceph_crypto_key_destroy(struct ceph_crypto_key *key)
 static const u8 *aes_iv = (u8 *)CEPH_AES_IV;
 
 /*
- * Should be used for buffers allocated with ceph_kvmalloc().
+ * Should be used for buffers allocated with kvmalloc().
  * Currently these are encrypt out-buffer (ceph_buffer) and decrypt
  * in-buffer (msg front).
  *
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 57d043b382ed..7b891be799d2 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1920,7 +1920,7 @@ struct ceph_msg *ceph_msg_new2(int type, int front_len, int max_data_items,
 
 	/* front */
 	if (front_len) {
-		m->front.iov_base = ceph_kvmalloc(front_len, flags);
+		m->front.iov_base = kvmalloc(front_len, flags);
 		if (m->front.iov_base == NULL) {
 			dout("ceph_msg_new can't allocate %d bytes\n",
 			     front_len);
diff --git a/net/ceph/messenger_v2.c b/net/ceph/messenger_v2.c
index cc40ce4e02fb..c4099b641b38 100644
--- a/net/ceph/messenger_v2.c
+++ b/net/ceph/messenger_v2.c
@@ -308,7 +308,7 @@ static void *alloc_conn_buf(struct ceph_connection *con, int len)
 	if (WARN_ON(con->v2.conn_buf_cnt >= ARRAY_SIZE(con->v2.conn_bufs)))
 		return NULL;
 
-	buf = ceph_kvmalloc(len, GFP_NOIO);
+	buf = kvmalloc(len, GFP_NOIO);
 	if (!buf)
 		return NULL;
 
diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
index 75b738083523..2823bb3cff55 100644
--- a/net/ceph/osdmap.c
+++ b/net/ceph/osdmap.c
@@ -980,7 +980,7 @@ static struct crush_work *alloc_workspace(const struct crush_map *c)
 	work_size = crush_work_size(c, CEPH_PG_MAX_SIZE);
 	dout("%s work_size %zu bytes\n", __func__, work_size);
 
-	work = ceph_kvmalloc(work_size, GFP_NOIO);
+	work = kvmalloc(work_size, GFP_NOIO);
 	if (!work)
 		return NULL;
 
@@ -1190,9 +1190,9 @@ static int osdmap_set_max_osd(struct ceph_osdmap *map, u32 max)
 	if (max == map->max_osd)
 		return 0;
 
-	state = ceph_kvmalloc(array_size(max, sizeof(*state)), GFP_NOFS);
-	weight = ceph_kvmalloc(array_size(max, sizeof(*weight)), GFP_NOFS);
-	addr = ceph_kvmalloc(array_size(max, sizeof(*addr)), GFP_NOFS);
+	state = kvmalloc(array_size(max, sizeof(*state)), GFP_NOFS);
+	weight = kvmalloc(array_size(max, sizeof(*weight)), GFP_NOFS);
+	addr = kvmalloc(array_size(max, sizeof(*addr)), GFP_NOFS);
 	if (!state || !weight || !addr) {
 		kvfree(state);
 		kvfree(weight);
@@ -1222,7 +1222,7 @@ static int osdmap_set_max_osd(struct ceph_osdmap *map, u32 max)
 	if (map->osd_primary_affinity) {
 		u32 *affinity;
 
-		affinity = ceph_kvmalloc(array_size(max, sizeof(*affinity)),
+		affinity = kvmalloc(array_size(max, sizeof(*affinity)),
 				 GFP_NOFS);
 		if (!affinity)
 			return -ENOMEM;
@@ -1503,7 +1503,7 @@ static int set_primary_affinity(struct ceph_osdmap *map, int osd, u32 aff)
 	if (!map->osd_primary_affinity) {
 		int i;
 
-		map->osd_primary_affinity = ceph_kvmalloc(
+		map->osd_primary_affinity = kvmalloc(
 			array_size(map->max_osd,
 				   sizeof(*map->osd_primary_affinity)),
 			GFP_NOFS);
 		if (!map->osd_primary_affinity)
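Taken together with the two previous patches, this lets the ceph call
sites above pass their restricted gfp masks straight to kvmalloc(). A
minimal sketch of the resulting pattern (the wrapper names here are
illustrative, not from the patch):

#include <linux/mm.h>
#include <linux/overflow.h>

/* Illustrative wrappers mirroring the converted call sites: the former
 * ceph_kvmalloc(..., GFP_NOFS/GFP_NOIO) calls become plain kvmalloc()
 * with the same mask, now honoured by the vmalloc fallback as well. */
static void *alloc_osd_state_array(u32 max, size_t elem_size)
{
        return kvmalloc(array_size(max, elem_size), GFP_NOFS);
}

static void *alloc_conn_buffer(int len)
{
        return kvmalloc(len, GFP_NOIO);
}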