From patchwork Thu Aug 18 14:31:13 2022
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 12947991
From: Yafang Shao
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org, kafai@fb.com,
	songliubraving@fb.com, yhs@fb.com,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
	haoluo@google.com, jolsa@kernel.org, hannes@cmpxchg.org,
	mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
	songmuchun@bytedance.com, akpm@linux-foundation.org, tj@kernel.org,
	lizefan.x@bytedance.com
Cc: cgroups@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	linux-mm@kvack.org, Yafang Shao, Andrii Nakryiko
Subject: [PATCH bpf-next v2 07/12] bpf: Introduce new helpers bpf_ringbuf_pages_{alloc,free}
Date: Thu, 18 Aug 2022 14:31:13 +0000
Message-Id: <20220818143118.17733-8-laoar.shao@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220818143118.17733-1-laoar.shao@gmail.com>
References: <20220818143118.17733-1-laoar.shao@gmail.com>

Move the pages-related allocation into a new helper, bpf_ringbuf_pages_alloc(),
so that it can be handled as a single unit.
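
(Illustration only, not part of the patch.) The helper sizes the page array
for nr_meta_pages + 2 * nr_data_pages entries but allocates only
nr_meta_pages + nr_data_pages pages; each data page is stored a second time
at slot nr_data_pages + i so that vmap() later sees the data area twice in a
row. A tiny user-space model of that index math, with made-up sizes, is:

	#include <stdio.h>

	int main(void)
	{
		int nr_meta_pages = 2, nr_data_pages = 4;	/* toy sizes */
		int nr_pages = nr_meta_pages + nr_data_pages;
		int pages[nr_meta_pages + 2 * nr_data_pages];	/* page "ids" */

		for (int i = 0; i < nr_pages; i++) {
			pages[i] = i;			/* stand-in for alloc_pages_node() */
			if (i >= nr_meta_pages)
				pages[nr_data_pages + i] = i;	/* second slot for a data page */
		}

		for (int i = 0; i < nr_meta_pages + 2 * nr_data_pages; i++)
			printf("slot %2d -> page %d\n", i, pages[i]);
		return 0;
	}

With these toy sizes, slots 0-5 hold the real pages and slots 6-9 repeat the
four data pages, which is exactly the array layout handed to vmap().
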
Suggested-by: Andrii Nakryiko
Signed-off-by: Yafang Shao
Acked-by: Andrii Nakryiko
---
 kernel/bpf/ringbuf.c | 80 ++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 56 insertions(+), 24 deletions(-)

diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 5eb7820..1e7284c 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -59,6 +59,57 @@ struct bpf_ringbuf_hdr {
 	u32 pg_off;
 };
 
+static void bpf_ringbuf_pages_free(struct page **pages, int nr_pages)
+{
+	int i;
+
+	for (i = 0; i < nr_pages; i++)
+		__free_page(pages[i]);
+	bpf_map_area_free(pages, NULL);
+}
+
+static struct page **bpf_ringbuf_pages_alloc(struct bpf_map *map,
+					     int nr_meta_pages,
+					     int nr_data_pages,
+					     int numa_node,
+					     const gfp_t flags)
+{
+	int nr_pages = nr_meta_pages + nr_data_pages;
+	struct mem_cgroup *memcg, *old_memcg;
+	struct page **pages, *page;
+	int array_size;
+	int i;
+
+	memcg = bpf_map_get_memcg(map);
+	old_memcg = set_active_memcg(memcg);
+	array_size = (nr_meta_pages + 2 * nr_data_pages) * sizeof(*pages);
+	pages = bpf_map_area_alloc(array_size, numa_node, NULL);
+	if (!pages)
+		goto err;
+
+	for (i = 0; i < nr_pages; i++) {
+		page = alloc_pages_node(numa_node, flags, 0);
+		if (!page) {
+			nr_pages = i;
+			goto err_free_pages;
+		}
+		pages[i] = page;
+		if (i >= nr_meta_pages)
+			pages[nr_data_pages + i] = page;
+	}
+	set_active_memcg(old_memcg);
+	bpf_map_put_memcg(memcg);
+
+	return pages;
+
+err_free_pages:
+	bpf_ringbuf_pages_free(pages, nr_pages);
+err:
+	set_active_memcg(old_memcg);
+	bpf_map_put_memcg(memcg);
+	return NULL;
+}
+
 static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node,
 						  struct bpf_map *map)
 {
@@ -67,10 +118,8 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node,
 	int nr_meta_pages = RINGBUF_PGOFF + RINGBUF_POS_PAGES;
 	int nr_data_pages = data_sz >> PAGE_SHIFT;
 	int nr_pages = nr_meta_pages + nr_data_pages;
-	struct page **pages, *page;
 	struct bpf_ringbuf *rb;
-	size_t array_size;
-	int i;
+	struct page **pages;
 
 	/* Each data page is mapped twice to allow "virtual"
 	 * continuous read of samples wrapping around the end of ring
@@ -89,22 +138,11 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node,
 	 * when mmap()'ed in user-space, simplifying both kernel and
 	 * user-space implementations significantly.
 	 */
-	array_size = (nr_meta_pages + 2 * nr_data_pages) * sizeof(*pages);
-	pages = bpf_map_area_alloc(array_size, numa_node, map);
+	pages = bpf_ringbuf_pages_alloc(map, nr_meta_pages, nr_data_pages,
+					numa_node, flags);
 	if (!pages)
 		return NULL;
 
-	for (i = 0; i < nr_pages; i++) {
-		page = alloc_pages_node(numa_node, flags, 0);
-		if (!page) {
-			nr_pages = i;
-			goto err_free_pages;
-		}
-		pages[i] = page;
-		if (i >= nr_meta_pages)
-			pages[nr_data_pages + i] = page;
-	}
-
 	rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
 		  VM_MAP | VM_USERMAP, PAGE_KERNEL);
 	if (rb) {
@@ -114,10 +152,6 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node,
 		return rb;
 	}
 
-err_free_pages:
-	for (i = 0; i < nr_pages; i++)
-		__free_page(pages[i]);
-	bpf_map_area_free(pages, NULL);
 	return NULL;
 }
 
@@ -188,12 +222,10 @@ static void bpf_ringbuf_free(struct bpf_ringbuf *rb)
 	 * to unmap rb itself with vunmap() below
 	 */
	struct page **pages = rb->pages;
-	int i, nr_pages = rb->nr_pages;
+	int nr_pages = rb->nr_pages;
 
 	vunmap(rb);
-	for (i = 0; i < nr_pages; i++)
-		__free_page(pages[i]);
-	bpf_map_area_free(pages, NULL);
+	bpf_ringbuf_pages_free(pages, nr_pages);
 }
 
 static void ringbuf_map_free(struct bpf_map *map)
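
(Aside, not part of the patch.) The comment kept in bpf_ringbuf_area_alloc()
explains why the data pages are mapped twice: a sample that wraps past the
end of the ring can still be read or written as one contiguous range. A
minimal user-space model of such a wrap-around read, with made-up sizes and
data, looks like this:

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		const char data[] = "ABCDEFGH";	/* the "real" 8-byte data area */
		char virt[16];			/* data area appearing twice, back to back */
		char rec[6];

		memcpy(virt, data, 8);		/* first mapping */
		memcpy(virt + 8, data, 8);	/* second mapping of the same pages */

		/* A 5-byte record starting at offset 6 wraps past the end of
		 * the ring, yet in the doubled mapping it is one contiguous read. */
		memcpy(rec, virt + 6, 5);
		rec[5] = '\0';
		printf("%s\n", rec);		/* prints GHABC */
		return 0;
	}

In the kernel the second copy is of course not extra memory; the same struct
page is simply mapped at two virtual addresses by vmap().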