From patchwork Tue Dec 10 02:39:31 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13900694
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org,
 peterz@infradead.org, vbabka@suse.cz, bigeasy@linutronix.de,
 rostedt@goodmis.org, houtao1@huawei.com, hannes@cmpxchg.org,
 shakeel.butt@linux.dev, mhocko@suse.com, willy@infradead.org,
 tglx@linutronix.de, tj@kernel.org, linux-mm@kvack.org, kernel-team@fb.com
Subject: [PATCH bpf-next v2 1/6] mm, bpf: Introduce __GFP_TRYLOCK for opportunistic page allocation
Date: Mon, 9 Dec 2024 18:39:31 -0800
Message-Id: <20241210023936.46871-2-alexei.starovoitov@gmail.com>
In-Reply-To: <20241210023936.46871-1-alexei.starovoitov@gmail.com>
References: <20241210023936.46871-1-alexei.starovoitov@gmail.com>
MIME-Version: 1.0
From: Alexei Starovoitov

Tracing BPF programs execute from tracepoints and kprobes, where the
running context is unknown, yet they need to request additional memory.
Prior workarounds used pre-allocated memory and BPF-specific freelists
to satisfy such allocation requests.

Instead, introduce a __GFP_TRYLOCK flag that makes the page allocator
accessible from any context. It relies on the percpu free list of pages
that rmqueue_pcplist() should be able to pop a page from. If that fails
(due to IRQ re-entrancy or the list being empty), try_alloc_pages()
attempts to spin_trylock zone->lock and refill the percpu freelist as
usual. A BPF program may execute with IRQs disabled, and zone->lock is
a sleeping lock on RT, so trylock is the only option.

In theory we could introduce a percpu re-entrance counter and increment
it every time spin_lock_irqsave(&zone->lock, flags) is used, but we
cannot rely on it: even if this CPU is not in the page_alloc path,
spin_lock_irqsave() is not safe, since the BPF prog might be called
from a tracepoint where preemption is disabled. So trylock only.

Note, free_page and memcg are not taught about __GFP_TRYLOCK yet.
The support comes in the next patches.
This is a first step towards supporting BPF requirements in SLUB and
getting rid of bpf_mem_alloc. That goal was discussed at LSFMM:
https://lwn.net/Articles/974138/

Signed-off-by: Alexei Starovoitov
---
 include/linux/gfp.h            | 25 +++++++++++++++++++++++++
 include/linux/gfp_types.h      |  3 +++
 include/trace/events/mmflags.h |  1 +
 mm/fail_page_alloc.c           |  6 ++++++
 mm/internal.h                  |  1 +
 mm/page_alloc.c                | 17 ++++++++++++++---
 tools/perf/builtin-kmem.c      |  1 +
 7 files changed, 51 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index b0fe9f62d15b..f68daa9c997b 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -347,6 +347,31 @@ static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
 }
 #define alloc_page_vma(...)	alloc_hooks(alloc_page_vma_noprof(__VA_ARGS__))
 
+static inline struct page *try_alloc_pages_noprof(int nid, unsigned int order)
+{
+	/*
+	 * If spin_locks are not held and interrupts are enabled, use normal
+	 * path. BPF progs run under rcu_read_lock(), so in PREEMPT_RT
+	 * rcu_preempt_depth() will be >= 1 and will use trylock path.
+	 */
+	if (preemptible() && !rcu_preempt_depth())
+		return alloc_pages_node_noprof(nid,
+					       GFP_NOWAIT | __GFP_ZERO,
+					       order);
+	/*
+	 * Best effort allocation from percpu free list.
+	 * If it's empty attempt to spin_trylock zone->lock.
+	 * Do not specify __GFP_KSWAPD_RECLAIM to avoid wakeup_kswapd
+	 * that may need to grab a lock.
+	 * Do not specify __GFP_ACCOUNT to avoid local_lock.
+	 * Do not warn either.
+	 */
+	return alloc_pages_node_noprof(nid,
+				       __GFP_TRYLOCK | __GFP_NOWARN | __GFP_ZERO,
+				       order);
+}
+#define try_alloc_pages(...)	alloc_hooks(try_alloc_pages_noprof(__VA_ARGS__))
+
 extern unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order);
 #define __get_free_pages(...)	alloc_hooks(get_free_pages_noprof(__VA_ARGS__))

diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
index 65db9349f905..72b385a7888d 100644
--- a/include/linux/gfp_types.h
+++ b/include/linux/gfp_types.h
@@ -48,6 +48,7 @@ enum {
 	___GFP_THISNODE_BIT,
 	___GFP_ACCOUNT_BIT,
 	___GFP_ZEROTAGS_BIT,
+	___GFP_TRYLOCK_BIT,
 #ifdef CONFIG_KASAN_HW_TAGS
 	___GFP_SKIP_ZERO_BIT,
 	___GFP_SKIP_KASAN_BIT,
@@ -86,6 +87,7 @@ enum {
 #define ___GFP_THISNODE		BIT(___GFP_THISNODE_BIT)
 #define ___GFP_ACCOUNT		BIT(___GFP_ACCOUNT_BIT)
 #define ___GFP_ZEROTAGS		BIT(___GFP_ZEROTAGS_BIT)
+#define ___GFP_TRYLOCK		BIT(___GFP_TRYLOCK_BIT)
 #ifdef CONFIG_KASAN_HW_TAGS
 #define ___GFP_SKIP_ZERO	BIT(___GFP_SKIP_ZERO_BIT)
 #define ___GFP_SKIP_KASAN	BIT(___GFP_SKIP_KASAN_BIT)
@@ -293,6 +295,7 @@ enum {
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
 #define __GFP_ZEROTAGS	((__force gfp_t)___GFP_ZEROTAGS)
+#define __GFP_TRYLOCK	((__force gfp_t)___GFP_TRYLOCK)
 #define __GFP_SKIP_ZERO ((__force gfp_t)___GFP_SKIP_ZERO)
 #define __GFP_SKIP_KASAN ((__force gfp_t)___GFP_SKIP_KASAN)

diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index bb8a59c6caa2..592c93ee5f35 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -50,6 +50,7 @@
 	gfpflag_string(__GFP_RECLAIM),		\
 	gfpflag_string(__GFP_DIRECT_RECLAIM),	\
 	gfpflag_string(__GFP_KSWAPD_RECLAIM),	\
+	gfpflag_string(__GFP_TRYLOCK),		\
 	gfpflag_string(__GFP_ZEROTAGS)
 
 #ifdef CONFIG_KASAN_HW_TAGS

diff --git a/mm/fail_page_alloc.c b/mm/fail_page_alloc.c
index 7647096170e9..b3b297d67909 100644
--- a/mm/fail_page_alloc.c
+++ b/mm/fail_page_alloc.c
@@ -31,6 +31,12 @@ bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 		return false;
 	if (gfp_mask & __GFP_NOFAIL)
 		return false;
+	if (gfp_mask & __GFP_TRYLOCK)
+		/*
+		 * Internals of should_fail_ex() are not compatible
+		 * with trylock concept.
+		 */
+		return false;
 	if (fail_page_alloc.ignore_gfp_highmem && (gfp_mask & __GFP_HIGHMEM))
 		return false;
 	if (fail_page_alloc.ignore_gfp_reclaim &&

diff --git a/mm/internal.h b/mm/internal.h
index cb8d8e8e3ffa..c082b8fa1d71 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1175,6 +1175,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #endif
 #define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
+#define ALLOC_TRYLOCK		0x1000000 /* Only use spin_trylock in allocation path */
 
 /* Flags that allow allocations below the min watermark. */
 #define ALLOC_RESERVES (ALLOC_NON_BLOCK|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1cb4b8c8886d..d511e68903c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2304,7 +2304,11 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	unsigned long flags;
 	int i;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	if (!spin_trylock_irqsave(&zone->lock, flags)) {
+		if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+			return 0;
+		spin_lock_irqsave(&zone->lock, flags);
+	}
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype,
 								alloc_flags);
@@ -2904,7 +2908,11 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 
 	do {
 		page = NULL;
-		spin_lock_irqsave(&zone->lock, flags);
+		if (!spin_trylock_irqsave(&zone->lock, flags)) {
+			if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+				return NULL;
+			spin_lock_irqsave(&zone->lock, flags);
+		}
 		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 		if (!page) {
@@ -4001,6 +4009,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 	 */
 	BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_MIN_RESERVE);
 	BUILD_BUG_ON(__GFP_KSWAPD_RECLAIM != (__force gfp_t) ALLOC_KSWAPD);
+	BUILD_BUG_ON(__GFP_TRYLOCK != (__force gfp_t) ALLOC_TRYLOCK);
 
 	/*
 	 * The caller may dip into page reserves a bit more if the caller
@@ -4009,7 +4018,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 	 * set both ALLOC_NON_BLOCK and ALLOC_MIN_RESERVE(__GFP_HIGH).
 	 */
 	alloc_flags |= (__force int)
-		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
+		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM | __GFP_TRYLOCK));
 
 	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
 		/*
@@ -4509,6 +4518,8 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 
 	might_alloc(gfp_mask);
 
+	*alloc_flags |= (__force int) (gfp_mask & __GFP_TRYLOCK);
+
 	if (should_fail_alloc_page(gfp_mask, order))
 		return false;

diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index 4d8d94146f8d..1f7f4269fa10 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -682,6 +682,7 @@ static const struct {
 	{ "__GFP_RECLAIM",		"R" },
 	{ "__GFP_DIRECT_RECLAIM",	"DR" },
 	{ "__GFP_KSWAPD_RECLAIM",	"KR" },
+	{ "__GFP_TRYLOCK",		"TL" },
 };
 
 static size_t max_gfp_len;