From patchwork Thu Dec 30 19:36:27 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12701688
From: Minchan Kim
To: Andrew Morton
Cc: Michal Hocko, David Hildenbrand, linux-mm, LKML, Suren Baghdasaryan,
    John Dias, Minchan Kim
Subject: [RESEND][PATCH v2] mm: don't call lru draining in the nested lru_cache_disable
Date: Thu, 30 Dec 2021 11:36:27 -0800
Message-Id: <20211230193627.495145-1-minchan@kernel.org>

lru_cache_disable() involves IPIs to drain the pagevec of each CPU, which
can take quite a long time to complete depending on how busy the CPUs are,
making allocation slow by up to several hundred milliseconds. Furthermore,
the repeated draining in alloc_contig_range() makes things worse, since
callers of alloc_contig_range() usually retry multiple times in a loop.

This patch makes lru_cache_disable() aware that the pagevec has already
been disabled. With that, users of alloc_contig_range() can disable the
lru cache up front in their own context for the duration of the retries,
avoiding the repeated costly draining during CMA allocation.

Signed-off-by: Minchan Kim
---
* from v1 - https://lore.kernel.org/lkml/20211206221006.946661-1-minchan@kernel.org/
    * fix lru_cache_disable race - akpm

 include/linux/swap.h | 14 ++------------
 mm/cma.c             |  5 +++++
 mm/swap.c            | 30 ++++++++++++++++++++++++++++--
 3 files changed, 35 insertions(+), 14 deletions(-)
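Purely for illustration (this is not part of the patch): under the new
semantics, a hypothetical caller that retries CMA allocation could disable
the lru cache once around its whole retry loop, so the nested
lru_cache_disable() issued inside cma_alloc()/alloc_contig_range() on every
attempt only bumps the disable count instead of triggering another round of
drain IPIs. The function name, retry policy, and constants below are made
up; only lru_cache_disable()/lru_cache_enable() and cma_alloc() are existing
kernel interfaces.

/*
 * Hypothetical caller-side sketch, not part of this patch: pay for the
 * pagevec drain once, then let every cma_alloc() attempt nest cheaply.
 */
#include <linux/cma.h>
#include <linux/swap.h>

static struct page *example_cma_alloc_retry(struct cma *cma,
					    unsigned long count,
					    unsigned int align)
{
	struct page *page = NULL;
	int attempt;

	lru_cache_disable();		/* drains all CPUs exactly once */

	for (attempt = 0; attempt < 5 && !page; attempt++)
		page = cma_alloc(cma, count, align, true /* no_warn */);

	lru_cache_enable();		/* pair with the disable above */

	return page;
}

The mm/cma.c hunk below applies the same pattern inside cma_alloc() itself,
around its bitmap retry loop.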
diff --git a/include/linux/swap.h b/include/linux/swap.h
index ba52f3a3478e..fe18e86a4f13 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -348,19 +348,9 @@ extern void lru_note_cost_page(struct page *);
 extern void lru_cache_add(struct page *);
 extern void mark_page_accessed(struct page *);
 
-extern atomic_t lru_disable_count;
-
-static inline bool lru_cache_disabled(void)
-{
-	return atomic_read(&lru_disable_count);
-}
-
-static inline void lru_cache_enable(void)
-{
-	atomic_dec(&lru_disable_count);
-}
-
+extern bool lru_cache_disabled(void);
 extern void lru_cache_disable(void);
+extern void lru_cache_enable(void);
 extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
diff --git a/mm/cma.c b/mm/cma.c
index 995e15480937..60be555c5b95 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -30,6 +30,7 @@
 #include <linux/cma.h>
 #include <linux/highmem.h>
 #include <linux/io.h>
+#include <linux/swap.h>
 #include <linux/kmemleak.h>
 #include <trace/events/cma.h>
 
@@ -453,6 +454,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	if (bitmap_count > bitmap_maxno)
 		goto out;
 
+	lru_cache_disable();
+
 	for (;;) {
 		spin_lock_irq(&cma->lock);
 		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
@@ -492,6 +495,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 		start = bitmap_no + mask + 1;
 	}
 
+	lru_cache_enable();
+
 	trace_cma_alloc_finish(cma->name, pfn, page, count, align);
 
 	/*
diff --git a/mm/swap.c b/mm/swap.c
index af3cad4e5378..5f89d7c9a54e 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -847,7 +847,17 @@ void lru_add_drain_all(void)
 }
 #endif /* CONFIG_SMP */
 
-atomic_t lru_disable_count = ATOMIC_INIT(0);
+static atomic_t lru_disable_count = ATOMIC_INIT(0);
+
+bool lru_cache_disabled(void)
+{
+	return atomic_read(&lru_disable_count) != 0;
+}
+
+void lru_cache_enable(void)
+{
+	atomic_dec(&lru_disable_count);
+}
 
 /*
  * lru_cache_disable() needs to be called before we start compiling
@@ -859,7 +869,21 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
  */
 void lru_cache_disable(void)
 {
-	atomic_inc(&lru_disable_count);
+	static DEFINE_MUTEX(lock);
+
+	/*
+	 * The lock guarantees the lru cache has been drained when the
+	 * function returns.
+	 */
+	mutex_lock(&lock);
+	/*
+	 * If someone else has already disabled the lru cache, just
+	 * return after incrementing lru_disable_count.
+	 */
+	if (atomic_inc_not_zero(&lru_disable_count)) {
+		mutex_unlock(&lock);
+		return;
+	}
 #ifdef CONFIG_SMP
 	/*
 	 * lru_add_drain_all in the force mode will schedule draining on
@@ -873,6 +897,8 @@ void lru_cache_disable(void)
 #else
 	lru_add_and_bh_lrus_drain();
 #endif
+	atomic_inc(&lru_disable_count);
+	mutex_unlock(&lock);
 }
 
 /**
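As a closing note (illustrative, not taken from the patch): with the
count-plus-mutex scheme above, disable/enable calls nest, and only the
outermost disabler pays for the drain. Because lru_disable_count is raised
only after the drain completes and every caller serializes on the mutex,
the lru cache is guaranteed to be drained by the time any
lru_cache_disable() call returns. A minimal sketch of the pairing, with
example_nested_disable() being a made-up placeholder name:

/*
 * Illustration only, not part of this patch: nested disable/enable.
 */
#include <linux/swap.h>

static void example_nested_disable(void)
{
	lru_cache_disable();	/* count 0 -> 1: takes the mutex, drains every CPU */

	lru_cache_disable();	/* count 1 -> 2: atomic_inc_not_zero() succeeds,
				 * returns without another drain */
	lru_cache_enable();	/* count 2 -> 1 */

	lru_cache_enable();	/* count 1 -> 0: lru pagevec caching resumes */
}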