From patchwork Wed Jan 19 14:35:38 2022
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 12717584
From: "Uladzislau Rezki (Sony)"
To: Andrew Morton
Cc: linux-mm@kvack.org, LKML, Christoph Hellwig, Matthew Wilcox,
    Nicholas Piggin, Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH 1/3] mm/vmalloc: Move draining areas out of caller context
Date: Wed, 19 Jan 2022 15:35:38 +0100
Message-Id: <20220119143540.601149-1-urezki@gmail.com>
A caller initiates the drain process from its context once the drain
threshold is reached or passed. There are at least two drawbacks of
doing so:

a) a caller can be a high-priority or RT task. In that case it can get
   stuck doing the actual drain of all lazily freed areas. This is not
   optimal, because such tasks are usually latency sensitive and control
   should be returned to them as soon as possible in order to drive such
   workloads in time. See 96e2db456135 ("mm/vmalloc: rework the drain
   logic");

b) it is not safe to call vfree() while holding a spinlock, due to the
   vmap_purge_lock mutex. There was a report about this from Zeal Robot
   here:
   https://lore.kernel.org/all/20211222081026.484058-1-chi.minghao@zte.com.cn

Moving the drain to a separate work context addresses both issues.

Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 35 ++++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 13 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index bdc7222f87d4..ed0f9eaa61a9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -793,6 +793,9 @@ RB_DECLARE_CALLBACKS_MAX(static, free_vmap_area_rb_augment_cb,
 static void purge_vmap_area_lazy(void);
 static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
 static unsigned long lazy_max_pages(void);
+static void drain_vmap_area(struct work_struct *work);
+static DECLARE_WORK(drain_vmap_area_work, drain_vmap_area);
+static atomic_t drain_vmap_area_work_in_progress;
 
 static atomic_long_t nr_vmalloc_pages;
 
@@ -1719,18 +1722,6 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	return true;
 }
 
-/*
- * Kick off a purge of the outstanding lazy areas. Don't bother if somebody
- * is already purging.
- */
-static void try_purge_vmap_area_lazy(void)
-{
-	if (mutex_trylock(&vmap_purge_lock)) {
-		__purge_vmap_area_lazy(ULONG_MAX, 0);
-		mutex_unlock(&vmap_purge_lock);
-	}
-}
-
 /*
  * Kick off a purge of the outstanding lazy areas.
  */
@@ -1742,6 +1733,23 @@ static void purge_vmap_area_lazy(void)
 	mutex_unlock(&vmap_purge_lock);
 }
 
+static void drain_vmap_area(struct work_struct *work)
+{
+	unsigned long nr_lazy;
+
+	do {
+		mutex_lock(&vmap_purge_lock);
+		__purge_vmap_area_lazy(ULONG_MAX, 0);
+		mutex_unlock(&vmap_purge_lock);
+
+		/* Recheck if further work is required. */
+		nr_lazy = atomic_long_read(&vmap_lazy_nr);
+	} while (nr_lazy > lazy_max_pages());
+
+	/* We are done at this point. */
+	atomic_set(&drain_vmap_area_work_in_progress, 0);
+}
+
 /*
  * Free a vmap area, caller ensuring that the area has been unmapped
  * and flush_cache_vunmap had been called for the correct range
@@ -1768,7 +1776,8 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 
 	/* After this point, we may free va at any time */
 	if (unlikely(nr_lazy > lazy_max_pages()))
-		try_purge_vmap_area_lazy();
+		if (!atomic_xchg(&drain_vmap_area_work_in_progress, 1))
+			schedule_work(&drain_vmap_area_work);
 }
 
 /*
From: "Uladzislau Rezki (Sony)"
To: Andrew Morton
Cc: linux-mm@kvack.org, LKML, Christoph Hellwig, Matthew Wilcox,
    Nicholas Piggin, Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH 2/3] mm/vmalloc: Add adjust_search_size parameter
Date: Wed, 19 Jan 2022 15:35:39 +0100
Message-Id: <20220119143540.601149-2-urezki@gmail.com>
In-Reply-To: <20220119143540.601149-1-urezki@gmail.com>
References: <20220119143540.601149-1-urezki@gmail.com>

From: Uladzislau Rezki

Extend the find_vmap_lowest_match() function with one more parameter,
an "adjust_search_size" boolean, so it is possible to control the
accuracy of the block search when a specific alignment is required.
With this patch the search size is adjusted by default, to serve a
request as fast as possible for performance reasons.
There is one exception though: short ranges where the requested size
exactly corresponds to the passed vstart/vend restriction together with
a specific alignment request. In such a scenario an adjustment would
make a successful allocation impossible.

Signed-off-by: Uladzislau Rezki
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 37 ++++++++++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ed0f9eaa61a9..52ee67107046 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1192,22 +1192,28 @@ is_within_this_va(struct vmap_area *va, unsigned long size,
 /*
  * Find the first free block(lowest start address) in the tree,
  * that will accomplish the request corresponding to passing
- * parameters.
+ * parameters. Please note, with an alignment bigger than PAGE_SIZE,
+ * a search length is adjusted to account for worst case alignment
+ * overhead.
  */
 static __always_inline struct vmap_area *
-find_vmap_lowest_match(unsigned long size,
-	unsigned long align, unsigned long vstart)
+find_vmap_lowest_match(unsigned long size, unsigned long align,
+	unsigned long vstart, bool adjust_search_size)
 {
 	struct vmap_area *va;
 	struct rb_node *node;
+	unsigned long length;
 
 	/* Start from the root. */
 	node = free_vmap_area_root.rb_node;
 
+	/* Adjust the search size for alignment overhead. */
+	length = adjust_search_size ? size + align - 1 : size;
+
 	while (node) {
 		va = rb_entry(node, struct vmap_area, rb_node);
 
-		if (get_subtree_max_size(node->rb_left) >= size &&
+		if (get_subtree_max_size(node->rb_left) >= length &&
 				vstart < va->va_start) {
 			node = node->rb_left;
 		} else {
@@ -1217,9 +1223,9 @@ find_vmap_lowest_match(unsigned long size,
 			/*
 			 * Does not make sense to go deeper towards the right
 			 * sub-tree if it does not have a free block that is
-			 * equal or bigger to the requested search size.
+			 * equal or bigger to the requested search length.
 			 */
-			if (get_subtree_max_size(node->rb_right) >= size) {
+			if (get_subtree_max_size(node->rb_right) >= length) {
 				node = node->rb_right;
 				continue;
 			}
@@ -1235,7 +1241,7 @@ find_vmap_lowest_match(unsigned long size,
 			if (is_within_this_va(va, size, align, vstart))
 				return va;
 
-			if (get_subtree_max_size(node->rb_right) >= size &&
+			if (get_subtree_max_size(node->rb_right) >= length &&
 					vstart <= va->va_start) {
 				/*
 				 * Shift the vstart forward. Please note, we update it with
@@ -1283,7 +1289,7 @@ find_vmap_lowest_match_check(unsigned long size, unsigned long align)
 	get_random_bytes(&rnd, sizeof(rnd));
 	vstart = VMALLOC_START + rnd;
 
-	va_1 = find_vmap_lowest_match(size, align, vstart);
+	va_1 = find_vmap_lowest_match(size, align, vstart, false);
 	va_2 = find_vmap_lowest_linear_match(size, align, vstart);
 
 	if (va_1 != va_2)
@@ -1434,12 +1440,25 @@ static __always_inline unsigned long
 __alloc_vmap_area(unsigned long size, unsigned long align,
 	unsigned long vstart, unsigned long vend)
 {
+	bool adjust_search_size = true;
 	unsigned long nva_start_addr;
 	struct vmap_area *va;
 	enum fit_type type;
 	int ret;
 
-	va = find_vmap_lowest_match(size, align, vstart);
+	/*
+	 * Do not adjust when:
+	 *   a) align <= PAGE_SIZE, because it does not make any sense.
+	 *      All blocks(their start addresses) are at least PAGE_SIZE
+	 *      aligned anyway;
+	 *   b) a short range where a requested size corresponds to exactly
+	 *      specified [vstart:vend] interval and an alignment > PAGE_SIZE.
+	 *      With adjusted search length an allocation would not succeed.
+	 */
+	if (align <= PAGE_SIZE || (align > PAGE_SIZE && (vend - vstart) == size))
+		adjust_search_size = false;
+
+	va = find_vmap_lowest_match(size, align, vstart, adjust_search_size);
 	if (unlikely(!va))
 		return vend;
 
From: "Uladzislau Rezki (Sony)"
To: Andrew Morton
Cc: linux-mm@kvack.org, LKML, Christoph Hellwig, Matthew Wilcox,
    Nicholas Piggin, Uladzislau Rezki, Oleksiy Avramchenko, Vasily Averin
Subject: [PATCH 3/3] mm/vmalloc: Eliminate an extra orig_gfp_mask
Date: Wed, 19 Jan 2022 15:35:40 +0100
Message-Id: <20220119143540.601149-3-urezki@gmail.com>
In-Reply-To: <20220119143540.601149-1-urezki@gmail.com>
References: <20220119143540.601149-1-urezki@gmail.com>
That extra variable was introduced just to keep the originally passed
gfp_mask, because the mask is updated with __GFP_NOWARN on entry, which
broke the error handling messages. Instead, keep the original gfp_mask
unmodified and pass an extra __GFP_NOWARN flag OR-ed with gfp_mask as a
parameter to the vm_area_alloc_pages() function. This makes the code
less confusing.
Cc: Vasily Averin
Signed-off-by: Uladzislau Rezki (Sony)
---
 mm/vmalloc.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 52ee67107046..04edd32ba6bc 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2953,7 +2953,6 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 int node)
 {
 	const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
-	const gfp_t orig_gfp_mask = gfp_mask;
 	bool nofail = gfp_mask & __GFP_NOFAIL;
 	unsigned long addr = (unsigned long)area->addr;
 	unsigned long size = get_vm_area_size(area);
@@ -2967,7 +2966,6 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	max_small_pages = ALIGN(size, 1UL << page_shift) >> PAGE_SHIFT;
 
 	array_size = (unsigned long)max_small_pages * sizeof(struct page *);
 
-	gfp_mask |= __GFP_NOWARN;
 	if (!(gfp_mask & (GFP_DMA | GFP_DMA32)))
 		gfp_mask |= __GFP_HIGHMEM;
 
@@ -2980,7 +2978,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	}
 
 	if (!area->pages) {
-		warn_alloc(orig_gfp_mask, NULL,
+		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, failed to allocated page array size %lu",
 			nr_small_pages * PAGE_SIZE, array_size);
 		free_vm_area(area);
@@ -2990,8 +2988,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
 	page_order = vm_area_page_order(area);
 
-	area->nr_pages = vm_area_alloc_pages(gfp_mask, node,
-		page_order, nr_small_pages, area->pages);
+	area->nr_pages = vm_area_alloc_pages(gfp_mask | __GFP_NOWARN,
+		node, page_order, nr_small_pages, area->pages);
 
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 	if (gfp_mask & __GFP_ACCOUNT) {
@@ -3007,7 +3005,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	 * allocation request, free them via __vfree() if any.
 	 */
 	if (area->nr_pages != nr_small_pages) {
-		warn_alloc(orig_gfp_mask, NULL,
+		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, page order %u, failed to allocate pages",
 			area->nr_pages * PAGE_SIZE, page_order);
 		goto fail;
@@ -3035,7 +3033,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	memalloc_noio_restore(flags);
 	if (ret < 0) {
-		warn_alloc(orig_gfp_mask, NULL,
+		warn_alloc(gfp_mask, NULL,
 			"vmalloc error: size %lu, failed to map pages",
 			area->nr_pages * PAGE_SIZE);
 		goto fail;