From patchwork Tue Jan 25 16:39:12 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 12724015
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, LKML, Christoph Hellwig, Matthew Wilcox,
 Nicholas Piggin, Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH v2 1/1] mm/vmalloc: Move draining areas out of caller context
Date: Tue, 25 Jan 2022 17:39:12 +0100
Message-Id: <20220125163912.2809-1-urezki@gmail.com>
X-Mailer: git-send-email 2.30.2

A caller initiates the drain process from its own context once the drain
threshold is reached or passed. There are at least two drawbacks of doing so:

a) The caller can be a high-prio or RT task. In that case it can get stuck
   doing the actual drain of all lazily freed areas. This is not optimal,
   because such tasks are usually latency sensitive and control should be
   returned to them as soon as possible so that they can drive their
   workloads in time. See 96e2db456135 ("mm/vmalloc: rework the drain logic")

b) It is not safe to call vfree() while holding a spinlock, because the
   drain path takes the vmap_purge_lock mutex and may sleep. There was a
   report about this from Zeal Robot:
   https://lore.kernel.org/all/20211222081026.484058-1-chi.minghao@zte.com.cn

Moving the drain to a separate work context addresses both issues.

v1->v2:
- Added the "_work" suffix to the drain worker function.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 35 ++++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 13 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index bdc7222f87d4..e5285c9d2e2a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -793,6 +793,9 @@ RB_DECLARE_CALLBACKS_MAX(static, free_vmap_area_rb_augment_cb,
 static void purge_vmap_area_lazy(void);
 static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
 static unsigned long lazy_max_pages(void);
+static void drain_vmap_area_work(struct work_struct *work);
+static DECLARE_WORK(drain_vmap_work, drain_vmap_area_work);
+static atomic_t drain_vmap_work_in_progress;
 
 static atomic_long_t nr_vmalloc_pages;
 
@@ -1719,18 +1722,6 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	return true;
 }
 
-/*
- * Kick off a purge of the outstanding lazy areas. Don't bother if somebody
- * is already purging.
- */
-static void try_purge_vmap_area_lazy(void)
-{
-	if (mutex_trylock(&vmap_purge_lock)) {
-		__purge_vmap_area_lazy(ULONG_MAX, 0);
-		mutex_unlock(&vmap_purge_lock);
-	}
-}
-
 /*
  * Kick off a purge of the outstanding lazy areas.
  */
@@ -1742,6 +1733,23 @@ static void purge_vmap_area_lazy(void)
 	mutex_unlock(&vmap_purge_lock);
 }
 
+static void drain_vmap_area_work(struct work_struct *work)
+{
+	unsigned long nr_lazy;
+
+	do {
+		mutex_lock(&vmap_purge_lock);
+		__purge_vmap_area_lazy(ULONG_MAX, 0);
+		mutex_unlock(&vmap_purge_lock);
+
+		/* Recheck if further work is required. */
+		nr_lazy = atomic_long_read(&vmap_lazy_nr);
+	} while (nr_lazy > lazy_max_pages());
+
+	/* We are done at this point. */
+	atomic_set(&drain_vmap_work_in_progress, 0);
+}
+
 /*
  * Free a vmap area, caller ensuring that the area has been unmapped
  * and flush_cache_vunmap had been called for the correct range
@@ -1768,7 +1776,8 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 
 	/* After this point, we may free va at any time */
 	if (unlikely(nr_lazy > lazy_max_pages()))
-		try_purge_vmap_area_lazy();
+		if (!atomic_xchg(&drain_vmap_work_in_progress, 1))
+			schedule_work(&drain_vmap_work);
 }
 
 /*
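
For illustration only, the deferral pattern applied by this patch can be
shown in isolation. The fragment below is a sketch with hypothetical names
(my_lazy_count, my_drain_threshold, my_free_fast_path and friends); it is
not part of the diff above. It shows how an atomic flag combined with
schedule_work() keeps the freeing fast path non-blocking while guaranteeing
that at most one drain worker is queued at a time:

#include <linux/workqueue.h>
#include <linux/atomic.h>

/* Hypothetical state standing in for vmap_lazy_nr and lazy_max_pages(). */
static atomic_long_t my_lazy_count;
static unsigned long my_drain_threshold = 1024;

static void my_drain_work(struct work_struct *work);
static DECLARE_WORK(my_drain_wq_item, my_drain_work);
static atomic_t my_drain_in_progress;

static void my_drain_locked(void)
{
	/* Stand-in for __purge_vmap_area_lazy(); may take a mutex and sleep. */
	atomic_long_set(&my_lazy_count, 0);
}

static void my_drain_work(struct work_struct *work)
{
	do {
		my_drain_locked();
		/* Re-check: more lazy frees may have accumulated meanwhile. */
	} while (atomic_long_read(&my_lazy_count) > my_drain_threshold);

	/* Allow the next over-threshold free to queue the work again. */
	atomic_set(&my_drain_in_progress, 0);
}

/* Called from the (possibly atomic) freeing path. */
static void my_free_fast_path(unsigned long nr)
{
	unsigned long lazy = atomic_long_add_return(nr, &my_lazy_count);

	/* Only the task that flips the flag 0 -> 1 schedules the worker. */
	if (lazy > my_drain_threshold &&
	    !atomic_xchg(&my_drain_in_progress, 1))
		schedule_work(&my_drain_wq_item);
}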