From patchwork Mon Jan 31 14:40:58 2022
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 12730730
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, LKML, Christoph Hellwig, Matthew Wilcox,
 Nicholas Piggin, Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v3 1/1] mm/vmalloc: Move draining areas out of caller context
Date: Mon, 31 Jan 2022 15:40:58 +0100
Message-Id: <20220131144058.35608-1-urezki@gmail.com>
X-Mailer: git-send-email 2.30.2

A caller initiates the drain process from its context once the drain
threshold is reached or passed. There are at least two drawbacks of
doing so:

a) A caller can be a high-priority or RT task. In that case it can get
   stuck doing the actual drain of all lazily freed areas. This is not
   optimal, because such tasks are usually latency sensitive and control
   should be returned to them as soon as possible in order to drive such
   workloads in time. See 96e2db456135 ("mm/vmalloc: rework the drain
   logic").

b) It is not safe to call vfree() while holding a spinlock, because the
   drain path takes the vmap_purge_lock mutex. There was a report about
   this from Zeal Robot here:
   https://lore.kernel.org/all/20211222081026.484058-1-chi.minghao@zte.com.cn

Moving the drain to a separate work context addresses both issues.

v1->v2:
 - Added the "_work" suffix to the drain worker function.

v2->v3:
 - Removed the drain_vmap_work_in_progress flag. Extra queuing is
   expected under heavy load, but it can be disregarded because the
   work bails out if there is nothing to be done.

Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Christoph Hellwig
---
 mm/vmalloc.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index bdc7222f87d4..5d721542bed7 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -793,6 +793,8 @@ RB_DECLARE_CALLBACKS_MAX(static, free_vmap_area_rb_augment_cb,
 static void purge_vmap_area_lazy(void);
 static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
 static unsigned long lazy_max_pages(void);
+static void drain_vmap_area_work(struct work_struct *work);
+static DECLARE_WORK(drain_vmap_work, drain_vmap_area_work);
 
 static atomic_long_t nr_vmalloc_pages;
 
@@ -1719,18 +1721,6 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	return true;
 }
 
-/*
- * Kick off a purge of the outstanding lazy areas. Don't bother if somebody
- * is already purging.
- */
-static void try_purge_vmap_area_lazy(void)
-{
-	if (mutex_trylock(&vmap_purge_lock)) {
-		__purge_vmap_area_lazy(ULONG_MAX, 0);
-		mutex_unlock(&vmap_purge_lock);
-	}
-}
-
 /*
  * Kick off a purge of the outstanding lazy areas.
  */
@@ -1742,6 +1732,20 @@ static void purge_vmap_area_lazy(void)
 	mutex_unlock(&vmap_purge_lock);
 }
 
+static void drain_vmap_area_work(struct work_struct *work)
+{
+	unsigned long nr_lazy;
+
+	do {
+		mutex_lock(&vmap_purge_lock);
+		__purge_vmap_area_lazy(ULONG_MAX, 0);
+		mutex_unlock(&vmap_purge_lock);
+
+		/* Recheck if further work is required. */
+		nr_lazy = atomic_long_read(&vmap_lazy_nr);
+	} while (nr_lazy > lazy_max_pages());
+}
+
 /*
  * Free a vmap area, caller ensuring that the area has been unmapped
  * and flush_cache_vunmap had been called for the correct range
@@ -1768,7 +1772,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 
 	/* After this point, we may free va at any time */
 	if (unlikely(nr_lazy > lazy_max_pages()))
-		try_purge_vmap_area_lazy();
+		schedule_work(&drain_vmap_work);
 }
 
 /*
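
For readers following along, here is the deferred-drain pattern in
isolation, as a minimal self-contained sketch. It is not part of the
patch: pending_items, DRAIN_THRESHOLD, drain_one_batch() and
note_item_freed() are illustrative names; only the
DECLARE_WORK()/schedule_work() mechanism and the re-check loop mirror
what the patch does in mm/vmalloc.c.

#include <linux/atomic.h>
#include <linux/workqueue.h>

/* Illustrative threshold, standing in for lazy_max_pages(). */
#define DRAIN_THRESHOLD		1024

static atomic_long_t pending_items;

static void drain_work_fn(struct work_struct *work);
static DECLARE_WORK(drain_work, drain_work_fn);

/*
 * Stand-in for __purge_vmap_area_lazy(): the expensive, sleepable
 * drain step that must not run in a latency-sensitive caller or
 * under a spinlock.
 */
static void drain_one_batch(void)
{
	atomic_long_set(&pending_items, 0);
}

static void drain_work_fn(struct work_struct *work)
{
	/*
	 * Re-check after each pass. Extra schedule_work() calls that
	 * arrive while the work is already running are harmless: the
	 * loop simply bails out once nothing is left to do, which is
	 * why v3 could drop the in-progress flag.
	 */
	do {
		drain_one_batch();
	} while (atomic_long_read(&pending_items) > DRAIN_THRESHOLD);
}

/* Hot path: never drains directly, only kicks the worker. */
static void note_item_freed(void)
{
	if (atomic_long_inc_return(&pending_items) > DRAIN_THRESHOLD)
		schedule_work(&drain_work);
}

The key property is that note_item_freed() stays O(1) regardless of how
much deferred work has piled up; all of the draining cost is absorbed by
the workqueue context, where sleeping on vmap_purge_lock is allowed.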