From patchwork Wed Dec 18 03:07:14 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13912974
From: alexei.starovoitov@gmail.com
To: bpf@vger.kernel.org
Cc: andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org,
	peterz@infradead.org, vbabka@suse.cz, bigeasy@linutronix.de,
	rostedt@goodmis.org, houtao1@huawei.com, hannes@cmpxchg.org,
	shakeel.butt@linux.dev, mhocko@suse.com, willy@infradead.org,
	tglx@linutronix.de, jannh@google.com, tj@kernel.org,
	linux-mm@kvack.org, kernel-team@fb.com
Subject: [PATCH bpf-next v3 1/6] mm, bpf: Introduce try_alloc_pages() for
 opportunistic page allocation
Date: Tue, 17 Dec 2024 19:07:14 -0800
Message-ID: <20241218030720.1602449-2-alexei.starovoitov@gmail.com>
X-Mailer: git-send-email 2.43.5
In-Reply-To: <20241218030720.1602449-1-alexei.starovoitov@gmail.com>
References: <20241218030720.1602449-1-alexei.starovoitov@gmail.com>
MIME-Version: 1.0

From: Alexei Starovoitov

Tracing BPF programs execute from tracepoints and kprobes where the
running context is unknown, but they need to request additional memory.
The prior workarounds used pre-allocated memory and BPF-specific
freelists to satisfy such allocation requests.

Instead, introduce an internal __GFP_TRYLOCK flag that makes the page
allocator accessible from any context. It relies on the percpu free
list of pages that rmqueue_pcplist() should be able to pop a page from.
If that fails (due to IRQ re-entrancy or the list being empty),
try_alloc_pages() attempts to spin_trylock zone->lock and refill the
percpu freelist as normal.

A BPF program may execute with IRQs disabled, and zone->lock is a
sleeping lock on RT, so trylock is the only option. In theory we could
introduce a percpu reentrance counter and increment it every time
spin_lock_irqsave(&zone->lock, flags) is used, but we cannot rely on
it: even when this cpu is not in the page_alloc path,
spin_lock_irqsave() is not safe, since the BPF prog might be called
from a tracepoint where preemption is disabled. So trylock only.

Note, free_page and memcg are not taught about __GFP_TRYLOCK yet; that
support comes in the next patches.

This is a first step towards supporting BPF requirements in SLUB and
getting rid of bpf_mem_alloc. That goal was discussed at LSFMM:
https://lwn.net/Articles/974138/
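For illustration only (this helper is hypothetical, not part of the
patch): from a tracing context the API is strictly best effort, so a
NULL return carries no meaning beyond "no memory right now":

	/*
	 * Hypothetical helper running in unknown (possibly IRQ/NMI)
	 * context. try_alloc_pages() never blocks, spin-waits or
	 * reclaims; it either pops a page or returns NULL.
	 */
	static void *try_get_scratch_page(void)
	{
		struct page *page = try_alloc_pages(NUMA_NO_NODE, 0);

		return page ? page_address(page) : NULL;
	}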
Signed-off-by: Alexei Starovoitov
---
 include/linux/gfp.h       |  3 ++
 include/linux/gfp_types.h |  1 +
 mm/internal.h             |  2 ++
 mm/page_alloc.c           | 69 ++++++++++++++++++++++++++++++++++++---
 4 files changed, 71 insertions(+), 4 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index b0fe9f62d15b..65b8df1db26a 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -347,6 +347,9 @@ static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
 }
 #define alloc_page_vma(...)	alloc_hooks(alloc_page_vma_noprof(__VA_ARGS__))
 
+struct page *try_alloc_pages_noprof(int nid, unsigned int order);
+#define try_alloc_pages(...)	alloc_hooks(try_alloc_pages_noprof(__VA_ARGS__))
+
 extern unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order);
 #define __get_free_pages(...)	alloc_hooks(get_free_pages_noprof(__VA_ARGS__))
 
diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
index 65db9349f905..65b148ec86eb 100644
--- a/include/linux/gfp_types.h
+++ b/include/linux/gfp_types.h
@@ -48,6 +48,7 @@ enum {
 	___GFP_THISNODE_BIT,
 	___GFP_ACCOUNT_BIT,
 	___GFP_ZEROTAGS_BIT,
+	___GFP_TRYLOCK_BIT,
 #ifdef CONFIG_KASAN_HW_TAGS
 	___GFP_SKIP_ZERO_BIT,
 	___GFP_SKIP_KASAN_BIT,
diff --git a/mm/internal.h b/mm/internal.h
index cb8d8e8e3ffa..122fce7e1a9e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1175,6 +1175,8 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #endif
 #define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
+#define __GFP_TRYLOCK	((__force gfp_t)BIT(___GFP_TRYLOCK_BIT))
+#define ALLOC_TRYLOCK	0x1000000 /* Only use spin_trylock in allocation path */
 
 /* Flags that allow allocations below the min watermark. */
 #define ALLOC_RESERVES (ALLOC_NON_BLOCK|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1cb4b8c8886d..d23545057b6e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2304,7 +2304,11 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	unsigned long flags;
 	int i;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	if (!spin_trylock_irqsave(&zone->lock, flags)) {
+		if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+			return 0;
+		spin_lock_irqsave(&zone->lock, flags);
+	}
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype,
 								alloc_flags);
@@ -2904,7 +2908,11 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 
 	do {
 		page = NULL;
-		spin_lock_irqsave(&zone->lock, flags);
+		if (!spin_trylock_irqsave(&zone->lock, flags)) {
+			if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+				return NULL;
+			spin_lock_irqsave(&zone->lock, flags);
+		}
 		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 		if (!page) {
@@ -4001,6 +4009,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 	 */
 	BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_MIN_RESERVE);
 	BUILD_BUG_ON(__GFP_KSWAPD_RECLAIM != (__force gfp_t) ALLOC_KSWAPD);
+	BUILD_BUG_ON(__GFP_TRYLOCK != (__force gfp_t) ALLOC_TRYLOCK);
 
 	/*
 	 * The caller may dip into page reserves a bit more if the caller
@@ -4009,7 +4018,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 	 * set both ALLOC_NON_BLOCK and ALLOC_MIN_RESERVE(__GFP_HIGH).
 	 */
 	alloc_flags |= (__force int)
-		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
+		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM | __GFP_TRYLOCK));
 
 	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
 		/*
@@ -4509,7 +4518,8 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 
 	might_alloc(gfp_mask);
 
-	if (should_fail_alloc_page(gfp_mask, order))
+	if (!(*alloc_flags & ALLOC_TRYLOCK) &&
+	    should_fail_alloc_page(gfp_mask, order))
 		return false;
 
 	*alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, *alloc_flags);
@@ -7023,3 +7033,54 @@ static bool __free_unaccepted(struct page *page)
 }
 
 #endif /* CONFIG_UNACCEPTED_MEMORY */
+
+struct page *try_alloc_pages_noprof(int nid, unsigned int order)
+{
+	gfp_t alloc_gfp = __GFP_NOWARN | __GFP_ZERO |
+			  __GFP_NOMEMALLOC | __GFP_TRYLOCK;
+	unsigned int alloc_flags = ALLOC_TRYLOCK;
+	struct alloc_context ac = { };
+	struct page *page;
+
+	/*
+	 * In RT spin_trylock() may call raw_spin_lock() which is unsafe in NMI.
+	 * If spin_trylock() is called from hard IRQ the current task may be
+	 * waiting for one rt_spin_lock, but rt_spin_trylock() will mark the
+	 * task as the owner of another rt_spin_lock which will confuse PI
+	 * logic, so return immediately if called from hard IRQ or NMI.
+	 *
+	 * Note, irqs_disabled() case is ok. This function can be called
+	 * from raw_spin_lock_irqsave region.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && (in_nmi() || in_hardirq()))
+		return NULL;
+	if (!pcp_allowed_order(order))
+		return NULL;
+
+#ifdef CONFIG_UNACCEPTED_MEMORY
+	if (has_unaccepted_memory() && !list_empty(&zone->unaccepted_pages))
+		return NULL;
+#endif
+
+	if (nid == NUMA_NO_NODE)
+		nid = numa_node_id();
+
+	prepare_alloc_pages(alloc_gfp, order, nid, NULL, &ac,
+			    &alloc_gfp, &alloc_flags);
+
+	/*
+	 * Best effort allocation from percpu free list.
+	 * If it's empty attempt to spin_trylock zone->lock.
+	 * Do not specify __GFP_KSWAPD_RECLAIM to avoid wakeup_kswapd
+	 * that may need to grab a lock.
+	 * Do not specify __GFP_ACCOUNT to avoid local_lock.
+	 * Do not warn either.
+	 */
+	page = get_page_from_freelist(alloc_gfp, order, alloc_flags, &ac);
+
+	/* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
+
+	trace_mm_page_alloc(page, order, alloc_gfp & ~__GFP_TRYLOCK, ac.migratetype);
+	kmsan_alloc_page(page, order, alloc_gfp);
+	return page;
+}
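Usage note (illustrative, not part of the patch): the allocation side is
now trylock-safe, but freeing is only taught about this in the next
patches, so with this patch alone a page obtained via try_alloc_pages()
should be released from a context where taking zone->lock is fine. A
minimal sketch with a hypothetical caller:

	static int demo_try_alloc(void)
	{
		/* order 0 always passes the pcp_allowed_order() check;
		 * higher orders may be rejected */
		struct page *page = try_alloc_pages(NUMA_NO_NODE, 0);

		if (!page)
			return -ENOMEM;	/* no lock taken, no reclaim; retry later */

		/* page is zeroed: try_alloc_pages() sets __GFP_ZERO */

		/* free from a normal, lock-safe context for now */
		__free_pages(page, 0);
		return 0;
	}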