From patchwork Wed Jan 19 14:35:38 2022
X-Patchwork-Submitter: Uladzislau Rezki
X-Patchwork-Id: 12717584
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, LKML, Christoph Hellwig, Matthew Wilcox,
    Nicholas Piggin, Uladzislau Rezki, Oleksiy Avramchenko
Subject: [PATCH 1/3] mm/vmalloc: Move draining areas out of caller context
Date: Wed, 19 Jan 2022 15:35:38 +0100
Message-Id: <20220119143540.601149-1-urezki@gmail.com>
X-Mailer: git-send-email 2.30.2
A caller initiates the drain process from its own context once the drain
threshold is reached or passed. There are at least two drawbacks of doing
so:

a) the caller can be a high-prio or RT task. In that case it can get
   stuck doing the actual drain of all lazily freed areas. This is not
   optimal because such tasks are usually latency sensitive, and control
   should be returned to them as soon as possible so those workloads can
   run in time. See 96e2db456135 ("mm/vmalloc: rework the drain logic");

b) it is not safe to call vfree() while holding a spinlock, because of
   the vmap_purge_lock mutex. There was a report about this from Zeal
   Robot here:
   https://lore.kernel.org/all/20211222081026.484058-1-chi.minghao@zte.com.cn

Moving the drain to a separate work context addresses both issues.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 35 ++++++++++++++++++++++-------------
 1 file changed, 22 insertions(+), 13 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index bdc7222f87d4..ed0f9eaa61a9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -793,6 +793,9 @@ RB_DECLARE_CALLBACKS_MAX(static, free_vmap_area_rb_augment_cb,
 static void purge_vmap_area_lazy(void);
 static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
 static unsigned long lazy_max_pages(void);
+static void drain_vmap_area(struct work_struct *work);
+static DECLARE_WORK(drain_vmap_area_work, drain_vmap_area);
+static atomic_t drain_vmap_area_work_in_progress;
 
 static atomic_long_t nr_vmalloc_pages;
 
@@ -1719,18 +1722,6 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	return true;
 }
 
-/*
- * Kick off a purge of the outstanding lazy areas. Don't bother if somebody
- * is already purging.
- */
-static void try_purge_vmap_area_lazy(void)
-{
-	if (mutex_trylock(&vmap_purge_lock)) {
-		__purge_vmap_area_lazy(ULONG_MAX, 0);
-		mutex_unlock(&vmap_purge_lock);
-	}
-}
-
 /*
  * Kick off a purge of the outstanding lazy areas.
  */
@@ -1742,6 +1733,23 @@ static void purge_vmap_area_lazy(void)
 	mutex_unlock(&vmap_purge_lock);
 }
 
+static void drain_vmap_area(struct work_struct *work)
+{
+	unsigned long nr_lazy;
+
+	do {
+		mutex_lock(&vmap_purge_lock);
+		__purge_vmap_area_lazy(ULONG_MAX, 0);
+		mutex_unlock(&vmap_purge_lock);
+
+		/* Recheck if further work is required. */
+		nr_lazy = atomic_long_read(&vmap_lazy_nr);
+	} while (nr_lazy > lazy_max_pages());
+
+	/* We are done at this point. */
+	atomic_set(&drain_vmap_area_work_in_progress, 0);
+}
+
 /*
  * Free a vmap area, caller ensuring that the area has been unmapped
  * and flush_cache_vunmap had been called for the correct range
@@ -1768,7 +1776,8 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 
 	/* After this point, we may free va at any time */
 	if (unlikely(nr_lazy > lazy_max_pages()))
-		try_purge_vmap_area_lazy();
+		if (!atomic_xchg(&drain_vmap_area_work_in_progress, 1))
+			schedule_work(&drain_vmap_area_work);
 }
 
 /*
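The core of the patch is the one-shot scheduling pattern: many freeing paths
may cross the lazy threshold concurrently, but atomic_xchg() guarantees that
only the first of them queues the drain work; the worker clears the flag when
it finishes, re-arming the trigger. The userspace sketch below models that
pattern with C11 atomics. All names here (free_area_noflush, schedule_drain,
drain_done, work_scheduled) are hypothetical stand-ins for the kernel's
free_vmap_area_noflush(), schedule_work() and drain_vmap_area(), not the
actual vmalloc API.

```c
#include <stdatomic.h>
#include <stdio.h>

/* Models drain_vmap_area_work_in_progress: 1 while a drain is queued
 * or running, 0 when the trigger is re-armed. */
static atomic_int drain_in_progress;

/* Counts how many times the drain work was actually queued. */
static int work_scheduled;

/* Stand-in for schedule_work(&drain_vmap_area_work). */
static void schedule_drain(void)
{
	work_scheduled++;
}

/* Models the tail of free_vmap_area_noflush(): past the threshold,
 * only the caller that flips the flag 0 -> 1 queues the work. */
static void free_area_noflush(long nr_lazy, long lazy_max)
{
	if (nr_lazy > lazy_max)
		if (!atomic_exchange(&drain_in_progress, 1))
			schedule_drain();
}

/* Models the end of drain_vmap_area(): re-arm the trigger. */
static void drain_done(void)
{
	atomic_store(&drain_in_progress, 0);
}
```

With this shape, a burst of over-threshold frees costs one queued work item,
and a new burst after the worker completes queues exactly one more.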