From patchwork Tue Aug 27 19:09:16 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 13780072
From: "Uladzislau Rezki (Sony)"
To: linux-mm@kvack.org, Andrew Morton
Cc: LKML, Baoquan He, Christoph Hellwig, Michal Hocko,
    Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH] mm: vmalloc: Refactor vm_area_alloc_pages() function
Date: Tue, 27 Aug 2024 21:09:16 +0200
Message-Id: <20240827190916.34242-1-urezki@gmail.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0

The aim is to simplify the vm_area_alloc_pages() function and make it
less confusing, as it has become rather cluttered over time:

- eliminate the "bulk_gfp" variable and do not overwrite the gfp flags
  for the bulk allocator;
- drop the __GFP_NOFAIL flag for high-order-page requests in the upper
  layer, so the __GFP_NOFAIL handling is less spread across levels;
- add a comment about the fallback path taken when a high-order attempt
  is unsuccessful, since __GFP_NOFAIL is dropped for such requests;
- fix a typo in a code comment.

Signed-off-by: Uladzislau Rezki (Sony)
Acked-by: Michal Hocko
Reviewed-by: Baoquan He
---
A small userspace sketch illustrating the resulting gfp-flag selection
is appended after the diff.

 mm/vmalloc.c | 37 +++++++++++++++++--------------------
 1 file changed, 17 insertions(+), 20 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3f9b6bd707d2..57862865e808 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3531,8 +3531,6 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		unsigned int order, unsigned int nr_pages, struct page **pages)
 {
 	unsigned int nr_allocated = 0;
-	gfp_t alloc_gfp = gfp;
-	bool nofail = gfp & __GFP_NOFAIL;
 	struct page *page;
 	int i;
 
@@ -3543,9 +3541,6 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 	 * more permissive.
 	 */
 	if (!order) {
-		/* bulk allocator doesn't support nofail req. officially */
-		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
-
 		while (nr_allocated < nr_pages) {
 			unsigned int nr, nr_pages_request;
 
@@ -3563,12 +3558,11 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			 * but mempolicy wants to alloc memory by interleaving.
 			 */
 			if (IS_ENABLED(CONFIG_NUMA) && nid == NUMA_NO_NODE)
-				nr = alloc_pages_bulk_array_mempolicy_noprof(bulk_gfp,
+				nr = alloc_pages_bulk_array_mempolicy_noprof(gfp,
 							nr_pages_request,
 							pages + nr_allocated);
-
 			else
-				nr = alloc_pages_bulk_array_node_noprof(bulk_gfp, nid,
+				nr = alloc_pages_bulk_array_node_noprof(gfp, nid,
 							nr_pages_request,
 							pages + nr_allocated);
 
@@ -3582,30 +3576,24 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			if (nr != nr_pages_request)
 				break;
 		}
-	} else if (gfp & __GFP_NOFAIL) {
-		/*
-		 * Higher order nofail allocations are really expensive and
-		 * potentially dangerous (pre-mature OOM, disruptive reclaim
-		 * and compaction etc.
-		 */
-		alloc_gfp &= ~__GFP_NOFAIL;
 	}
 
 	/* High-order pages or fallback path if "bulk" fails. */
 	while (nr_allocated < nr_pages) {
-		if (!nofail && fatal_signal_pending(current))
+		if (!(gfp & __GFP_NOFAIL) && fatal_signal_pending(current))
 			break;
 
 		if (nid == NUMA_NO_NODE)
-			page = alloc_pages_noprof(alloc_gfp, order);
+			page = alloc_pages_noprof(gfp, order);
 		else
-			page = alloc_pages_node_noprof(nid, alloc_gfp, order);
+			page = alloc_pages_node_noprof(nid, gfp, order);
+
 		if (unlikely(!page))
			break;
 
 		/*
 		 * Higher order allocations must be able to be treated as
-		 * indepdenent small pages by callers (as they can with
+		 * independent small pages by callers (as they can with
 		 * small-page vmallocs). Some drivers do their own refcounting
 		 * on vmalloc_to_page() pages, some use page->mapping,
 		 * page->lru, etc.
@@ -3666,7 +3654,16 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
 	page_order = vm_area_page_order(area);
 
-	area->nr_pages = vm_area_alloc_pages(gfp_mask | __GFP_NOWARN,
+	/*
+	 * Higher order nofail allocations are really expensive and
+	 * potentially dangerous (premature OOM, disruptive reclaim
+	 * and compaction etc.).
+	 *
+	 * Please note, the __vmalloc_node_range_noprof() falls back
+	 * to order-0 pages if a high-order attempt is unsuccessful.
+	 */
+	area->nr_pages = vm_area_alloc_pages((page_order ?
+		gfp_mask & ~__GFP_NOFAIL : gfp_mask) | __GFP_NOWARN,
 		node, page_order, nr_small_pages, area->pages);
 
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
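
As a side note for reviewers, the net effect on the gfp-flag handling is
that __GFP_NOFAIL is now dropped in exactly one place, __vmalloc_area_node(),
and only for high-order requests; order-0 requests (including the order-0
fallback) keep it. The small userspace sketch below models just that
selection. The gfp_t typedef, the flag values and the vmalloc_alloc_gfp()
helper name are illustrative stand-ins, not the kernel's definitions and not
part of this patch.

#include <stdio.h>

typedef unsigned int gfp_t;

/* Stand-in flag values for illustration only; not the kernel's gfp.h. */
#define __GFP_NOFAIL	0x1u
#define __GFP_NOWARN	0x2u

/*
 * Hypothetical helper mirroring the selection done in __vmalloc_area_node()
 * after this patch: a high-order request (page_order > 0) has __GFP_NOFAIL
 * stripped, an order-0 request keeps it, and __GFP_NOWARN is always added.
 */
static gfp_t vmalloc_alloc_gfp(gfp_t gfp_mask, unsigned int page_order)
{
	return (page_order ? gfp_mask & ~__GFP_NOFAIL : gfp_mask) | __GFP_NOWARN;
}

int main(void)
{
	gfp_t gfp = __GFP_NOFAIL;

	printf("order 0: %#x\n", vmalloc_alloc_gfp(gfp, 0));	/* keeps __GFP_NOFAIL */
	printf("order 2: %#x\n", vmalloc_alloc_gfp(gfp, 2));	/* __GFP_NOFAIL dropped */
	return 0;
}

With the stand-in values above this prints 0x3 for the order-0 case and 0x2
for the high-order case, i.e. __GFP_NOFAIL survives only the order-0 path.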