From patchwork Sat Nov 16 01:48:53 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13877455
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org,
    peterz@infradead.org, vbabka@suse.cz, houtao1@huawei.com,
    hannes@cmpxchg.org, shakeel.butt@linux.dev, mhocko@suse.com,
    tj@kernel.org, linux-mm@kvack.org, kernel-team@fb.com
Subject: [PATCH bpf-next 1/2] mm, bpf: Introduce __GFP_TRYLOCK for opportunistic page allocation
Date: Fri, 15 Nov 2024 17:48:53 -0800
Message-Id: <20241116014854.55141-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

Tracing BPF programs execute from tracepoints and kprobes where the
running context is unknown, but they need to request additional memory.
The prior workarounds used pre-allocated memory and BPF-specific
freelists to satisfy such allocation requests. Instead, introduce a
__GFP_TRYLOCK flag that makes the page allocator accessible from any
context. It relies on the percpu free list of pages from which
rmqueue_pcplist() should be able to pop a page. If that fails (due to
IRQ re-entrancy or the list being empty), try_alloc_page() attempts to
spin_trylock zone->lock and refill the percpu freelist as normal.

A BPF program may execute with IRQs disabled, and zone->lock is a
sleeping lock on RT, so trylock is the only option. In theory we could
introduce a percpu reentrance counter and increment it every time
spin_lock_irqsave(&zone->lock, flags) is used, but we cannot rely on
it. Even if this CPU is not in the page_alloc path, spin_lock_irqsave()
is not safe, since the BPF prog might be called from a tracepoint where
preemption is disabled. So trylock only.

There is no attempt to make free_page() accessible from any context
(yet). The BPF infrastructure will asynchronously free pages from such
contexts. memcg is also not charged in the try_alloc_page() path; that
has to be done asynchronously to avoid sleeping on
local_lock_irqsave(&memcg_stock.stock_lock, flags).

This is a first step towards supporting BPF requirements in SLUB and
getting rid of bpf_mem_alloc. That goal was discussed at LSFMM:
https://lwn.net/Articles/974138/

Signed-off-by: Alexei Starovoitov
---
 include/linux/gfp.h            | 17 +++++++++++++++++
 include/linux/gfp_types.h      |  3 +++
 include/trace/events/mmflags.h |  1 +
 mm/internal.h                  |  1 +
 mm/page_alloc.c                | 19 ++++++++++++++++---
 tools/perf/builtin-kmem.c      |  1 +
 6 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index a951de920e20..319d8906ef3f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -347,6 +347,23 @@ static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
 }
 #define alloc_page_vma(...)	alloc_hooks(alloc_page_vma_noprof(__VA_ARGS__))
 
+static inline struct page *try_alloc_page_noprof(int nid)
+{
+	/* If spin_locks are not held and interrupts are enabled, use normal path. */
+	if (preemptible())
+		return alloc_pages_node_noprof(nid, GFP_NOWAIT | __GFP_ZERO, 0);
+	/*
+	 * Best effort allocation from percpu free list.
+	 * If it's empty attempt to spin_trylock zone->lock.
+	 * Do not specify __GFP_KSWAPD_RECLAIM to avoid wakeup_kswapd
+	 * that may need to grab a lock.
+	 * Do not specify __GFP_ACCOUNT to avoid local_lock.
+	 * Do not warn either.
+	 */
+	return alloc_pages_node_noprof(nid, __GFP_TRYLOCK | __GFP_NOWARN | __GFP_ZERO, 0);
+}
+#define try_alloc_page(nid)	alloc_hooks(try_alloc_page_noprof(nid))
+
 extern unsigned long get_free_pages_noprof(gfp_t gfp_mask, unsigned int order);
 #define __get_free_pages(...)	alloc_hooks(get_free_pages_noprof(__VA_ARGS__))
 
diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
index 65db9349f905..72b385a7888d 100644
--- a/include/linux/gfp_types.h
+++ b/include/linux/gfp_types.h
@@ -48,6 +48,7 @@ enum {
 	___GFP_THISNODE_BIT,
 	___GFP_ACCOUNT_BIT,
 	___GFP_ZEROTAGS_BIT,
+	___GFP_TRYLOCK_BIT,
 #ifdef CONFIG_KASAN_HW_TAGS
 	___GFP_SKIP_ZERO_BIT,
 	___GFP_SKIP_KASAN_BIT,
@@ -86,6 +87,7 @@ enum {
 #define ___GFP_THISNODE	BIT(___GFP_THISNODE_BIT)
 #define ___GFP_ACCOUNT	BIT(___GFP_ACCOUNT_BIT)
 #define ___GFP_ZEROTAGS	BIT(___GFP_ZEROTAGS_BIT)
+#define ___GFP_TRYLOCK	BIT(___GFP_TRYLOCK_BIT)
 #ifdef CONFIG_KASAN_HW_TAGS
 #define ___GFP_SKIP_ZERO	BIT(___GFP_SKIP_ZERO_BIT)
 #define ___GFP_SKIP_KASAN	BIT(___GFP_SKIP_KASAN_BIT)
@@ -293,6 +295,7 @@ enum {
 #define __GFP_COMP	((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO	((__force gfp_t)___GFP_ZERO)
 #define __GFP_ZEROTAGS	((__force gfp_t)___GFP_ZEROTAGS)
+#define __GFP_TRYLOCK	((__force gfp_t)___GFP_TRYLOCK)
 #define __GFP_SKIP_ZERO ((__force gfp_t)___GFP_SKIP_ZERO)
 #define __GFP_SKIP_KASAN ((__force gfp_t)___GFP_SKIP_KASAN)
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index bb8a59c6caa2..592c93ee5f35 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -50,6 +50,7 @@
 	gfpflag_string(__GFP_RECLAIM),		\
 	gfpflag_string(__GFP_DIRECT_RECLAIM),	\
 	gfpflag_string(__GFP_KSWAPD_RECLAIM),	\
+	gfpflag_string(__GFP_TRYLOCK),		\
 	gfpflag_string(__GFP_ZEROTAGS)
 
 #ifdef CONFIG_KASAN_HW_TAGS
diff --git a/mm/internal.h b/mm/internal.h
index 64c2eb0b160e..c1b08e95a63b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1173,6 +1173,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 #endif
 #define ALLOC_HIGHATOMIC	0x200 /* Allows access to MIGRATE_HIGHATOMIC */
 #define ALLOC_KSWAPD		0x800 /* allow waking of kswapd, __GFP_KSWAPD_RECLAIM set */
+#define ALLOC_TRYLOCK		0x1000000 /* Only use spin_trylock in allocation path */
 /* Flags that allow allocations below the min watermark.
  */
 #define ALLOC_RESERVES (ALLOC_NON_BLOCK|ALLOC_MIN_RESERVE|ALLOC_HIGHATOMIC|ALLOC_OOM)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 216fbbfbedcf..71fed4f5bd0c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2304,7 +2304,12 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	unsigned long flags;
 	int i;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
+		if (!spin_trylock_irqsave(&zone->lock, flags))
+			return 0;
+	} else {
+		spin_lock_irqsave(&zone->lock, flags);
+	}
 	for (i = 0; i < count; ++i) {
 		struct page *page = __rmqueue(zone, order, migratetype, alloc_flags);
 
@@ -2904,7 +2909,12 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 	do {
 		page = NULL;
-		spin_lock_irqsave(&zone->lock, flags);
+		if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
+			if (!spin_trylock_irqsave(&zone->lock, flags))
+				return 0;
+		} else {
+			spin_lock_irqsave(&zone->lock, flags);
+		}
 		if (alloc_flags & ALLOC_HIGHATOMIC)
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 		if (!page) {
@@ -4001,6 +4011,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 	 */
 	BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t) ALLOC_MIN_RESERVE);
 	BUILD_BUG_ON(__GFP_KSWAPD_RECLAIM != (__force gfp_t) ALLOC_KSWAPD);
+	BUILD_BUG_ON(__GFP_TRYLOCK != (__force gfp_t) ALLOC_TRYLOCK);
 
 	/*
 	 * The caller may dip into page reserves a bit more if the caller
@@ -4009,7 +4020,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
 	 * set both ALLOC_NON_BLOCK and ALLOC_MIN_RESERVE(__GFP_HIGH).
 	 */
 	alloc_flags |= (__force int)
-		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM));
+		(gfp_mask & (__GFP_HIGH | __GFP_KSWAPD_RECLAIM | __GFP_TRYLOCK));
 
 	if (!(gfp_mask & __GFP_DIRECT_RECLAIM)) {
 		/*
@@ -4509,6 +4520,8 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 
 	might_alloc(gfp_mask);
 
+	*alloc_flags |= (__force int) (gfp_mask & __GFP_TRYLOCK);
+
 	if (should_fail_alloc_page(gfp_mask, order))
 		return false;
 
diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index a756147e2eec..d245ff60d2a6 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -682,6 +682,7 @@ static const struct {
 	{ "__GFP_RECLAIM",		"R" },
 	{ "__GFP_DIRECT_RECLAIM",	"DR" },
 	{ "__GFP_KSWAPD_RECLAIM",	"KR" },
+	{ "__GFP_TRYLOCK",		"TL" },
 };
 
 static size_t max_gfp_len;
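
For illustration only (editorial addition, not part of the series): a minimal
sketch of how a caller in a restricted context might use try_alloc_page() as
described in the 1/2 commit message. The helper name grab_scratch_page() is
hypothetical; the allocation can fail and the page must not be freed from
this context.

#include <linux/gfp.h>
#include <linux/mm.h>

static void *grab_scratch_page(void)
{
	/*
	 * May run with IRQs disabled or while holding spinlocks.
	 * try_alloc_page() takes the normal GFP_NOWAIT path when
	 * preemptible(), otherwise the trylock-only __GFP_TRYLOCK path,
	 * so NULL is an expected outcome under contention.
	 */
	struct page *page = try_alloc_page(NUMA_NO_NODE);

	if (!page)
		return NULL;
	/*
	 * The page is not memcg-charged and free_page() is not yet safe
	 * from this context; hand it to infrastructure that frees it
	 * later from process context.
	 */
	return page_address(page);
}
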
From patchwork Sat Nov 16 01:48:54 2024
X-Patchwork-Submitter: Alexei Starovoitov
X-Patchwork-Id: 13877456
From: Alexei Starovoitov
To: bpf@vger.kernel.org
Cc: andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org,
    peterz@infradead.org, vbabka@suse.cz, houtao1@huawei.com,
    hannes@cmpxchg.org, shakeel.butt@linux.dev, mhocko@suse.com,
    tj@kernel.org, linux-mm@kvack.org, kernel-team@fb.com
Subject: [PATCH bpf-next 2/2] bpf: Use try_alloc_page() to allocate pages for bpf needs.
Date: Fri, 15 Nov 2024 17:48:54 -0800
Message-Id: <20241116014854.55141-2-alexei.starovoitov@gmail.com>
In-Reply-To: <20241116014854.55141-1-alexei.starovoitov@gmail.com>
References: <20241116014854.55141-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

Incomplete patch. If the __GFP_TRYLOCK approach is acceptable, support
for memcg charging and async page freeing will follow.

Signed-off-by: Alexei Starovoitov
---
 kernel/bpf/syscall.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 58190ca724a2..26e6cffb2fe9 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -581,12 +581,14 @@ int bpf_map_alloc_pages(const struct bpf_map *map, gfp_t gfp, int nid,
 	old_memcg = set_active_memcg(memcg);
 #endif
 	for (i = 0; i < nr_pages; i++) {
-		pg = alloc_pages_node(nid, gfp | __GFP_ACCOUNT, 0);
+		/* TODO: add async memcg charge */
+		pg = try_alloc_page(nid);
 
 		if (pg) {
 			pages[i] = pg;
 			continue;
 		}
+		/* TODO: add async page free */
 		for (j = 0; j < i; j++)
 			__free_page(pages[j]);
 		ret = -ENOMEM;
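
An editorial sketch, not the author's implementation, of what the "async page
free" TODO above could look like: pages released from a restricted context are
parked on a lock-free llist and later freed from process context, where
__free_page() is safe. All names here are hypothetical, the llist_node is
stored in the (lowmem) page's own memory, and kicking the worker from a
restricted context is deliberately left out.

#include <linux/llist.h>
#include <linux/workqueue.h>
#include <linux/mm.h>

static LLIST_HEAD(deferred_free_list);

static void deferred_free_fn(struct work_struct *work)
{
	struct llist_node *pos, *next;

	/* Process context: __free_page() may take zone->lock normally. */
	llist_for_each_safe(pos, next, llist_del_all(&deferred_free_list))
		__free_page(virt_to_page(pos));
}
static DECLARE_WORK(deferred_free_work, deferred_free_fn);

/* Callable from a restricted context: llist_add() is lock-free. */
static void defer_free_page(struct page *page)
{
	llist_add((struct llist_node *)page_address(page), &deferred_free_list);
	/*
	 * A real implementation still has to schedule deferred_free_work
	 * from a safe context (e.g. via irq_work); queue_work() is not
	 * usable from every context, so that step is omitted here.
	 */
}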