From patchwork Mon Nov 6 02:44:00 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13446230
Date: Sun, 5 Nov 2023 18:44:00 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
Message-ID: <20231106024413.2801438-2-almasrymina@google.com>
Subject: [RFC PATCH v3 01/12] net: page_pool: factor out releasing DMA from releasing the page
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann, David Ahern, Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König, Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi

From: Jakub Kicinski

Releasing the DMA mapping will be useful for other types of pages, so factor it out. Make sure the compiler inlines it, to avoid any regressions.

Signed-off-by: Jakub Kicinski
Signed-off-by: Mina Almasry

---

This is implemented by Jakub in his RFC:
https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.com/T/

I take no credit for the idea or implementation. This is a critical dependency of device memory TCP and thus I'm pulling it into this series to make it reviewable and mergeable.

---

 net/core/page_pool.c | 25 ++++++++++++++++--------- 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 5e409b98aba0..578b6f2eeb46 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c
@@ -514,21 +514,16 @@ static s32 page_pool_inflight(struct page_pool *pool) return inflight; } -/* Disconnects a page (from a page_pool). API users can have a need - * to disconnect a page (from a page_pool), to allow it to be used as - * a regular page (that will eventually be returned to the normal - * page-allocator via put_page). - */ -static void page_pool_return_page(struct page_pool *pool, struct page *page) +static __always_inline +void __page_pool_release_page_dma(struct page_pool *pool, struct page *page) { dma_addr_t dma; - int count; if (!(pool->p.flags & PP_FLAG_DMA_MAP)) /* Always account for inflight pages, even if we didn't * map them */ - goto skip_dma_unmap; + return; dma = page_pool_get_dma_addr(page);
@@ -537,7 +532,19 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page) PAGE_SIZE << pool->p.order, pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING); page_pool_set_dma_addr(page, 0); -skip_dma_unmap: +} + +/* Disconnects a page (from a page_pool). API users can have a need + * to disconnect a page (from a page_pool), to allow it to be used as + * a regular page (that will eventually be returned to the normal + * page-allocator via put_page).
+ */ +void page_pool_return_page(struct page_pool *pool, struct page *page) +{ + int count; + + __page_pool_release_page_dma(pool, page); + page_pool_clear_pp_info(page); /* This may be the last page returned, releasing the pool, so
From patchwork Mon Nov 6 02:44:01 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13446231
Date: Sun, 5 Nov 2023 18:44:01 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
Message-ID: <20231106024413.2801438-3-almasrymina@google.com>
Subject: [RFC PATCH v3 02/12] net: page_pool: create hooks for custom page providers
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann, David Ahern, Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König, Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi

From: Jakub Kicinski

Page providers which try to reuse the same pages will need to hold onto the ref, even if the page gets released from the pool - releasing the page from the pp just transfers the "ownership" reference from the pp to the provider, and the provider will wait for other references to be gone before feeding this page back into the pool.

Signed-off-by: Jakub Kicinski
Signed-off-by: Mina Almasry

---

This is implemented by Jakub in his RFC:
https://lore.kernel.org/netdev/f8270765-a27b-6ccf-33ea-cda097168d79@redhat.com/T/

I take no credit for the idea or implementation; I only added minor edits to make this workable with device memory TCP, and removed some hacky test code. This is a critical dependency of device memory TCP and thus I'm pulling it into this series to make it reviewable and mergeable.

---

 include/net/page_pool/types.h | 18 +++++++++++++ net/core/page_pool.c | 51 +++++++++++++++++++++++++++++---- 2 files changed, 64 insertions(+), 5 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 6fc5134095ed..d4bea053bb7e 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h
@@ -60,6 +60,8 @@ struct page_pool_params { int nid; struct device *dev; struct napi_struct *napi; + u8 memory_provider; + void *mp_priv; enum dma_data_direction dma_dir; unsigned int max_len; unsigned int offset;
@@ -118,6 +120,19 @@ struct page_pool_stats { }; #endif +struct mem_provider; + +enum pp_memory_provider_type { + __PP_MP_NONE, /* Use system allocator directly */ +}; + +struct pp_memory_provider_ops { + int (*init)(struct page_pool *pool); + void (*destroy)(struct page_pool *pool); + struct page *(*alloc_pages)(struct page_pool *pool, gfp_t gfp); + bool (*release_page)(struct page_pool *pool, struct page *page); +}; + struct page_pool { struct page_pool_params p;
@@ -165,6 +180,9 @@ struct page_pool { */ struct ptr_ring ring; + const struct pp_memory_provider_ops *mp_ops; + void *mp_priv; + #ifdef CONFIG_PAGE_POOL_STATS /* recycle stats are per-cpu to avoid locking */ struct page_pool_recycle_stats __percpu *recycle_stats;

diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 578b6f2eeb46..7ea1f4682479 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c
@@ -23,6 +23,8 @@ #include +static DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers); + #define DEFER_TIME (msecs_to_jiffies(1000)) #define DEFER_WARN_INTERVAL (60 * HZ)
@@ -172,6 +174,7 @@ static int page_pool_init(struct page_pool *pool, const struct page_pool_params *params) { unsigned int ring_qsize = 1024; /* Default */ + int err; memcpy(&pool->p, params, sizeof(pool->p));
@@ -225,10 +228,34 @@ static int page_pool_init(struct page_pool *pool, /* Driver calling page_pool_create() also call page_pool_destroy() */ refcount_set(&pool->user_cnt, 1); + switch (pool->p.memory_provider) { + case __PP_MP_NONE: + break; + default: + err = -EINVAL; +
goto free_ptr_ring; + } + + pool->mp_priv = pool->p.mp_priv; + if (pool->mp_ops) { + err = pool->mp_ops->init(pool); + if (err) { + pr_warn("%s() mem-provider init failed %d\n", + __func__, err); + goto free_ptr_ring; + } + + static_branch_inc(&page_pool_mem_providers); + } + if (pool->p.flags & PP_FLAG_DMA_MAP) get_device(pool->p.dev); return 0; + +free_ptr_ring: + ptr_ring_cleanup(&pool->ring, NULL); + return err; } /** @@ -490,7 +517,10 @@ struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp) return page; /* Slow-path: cache empty, do real allocation */ - page = __page_pool_alloc_pages_slow(pool, gfp); + if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops) + page = pool->mp_ops->alloc_pages(pool, gfp); + else + page = __page_pool_alloc_pages_slow(pool, gfp); return page; } EXPORT_SYMBOL(page_pool_alloc_pages); @@ -542,10 +572,13 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page) void page_pool_return_page(struct page_pool *pool, struct page *page) { int count; + bool put; - __page_pool_release_page_dma(pool, page); - - page_pool_clear_pp_info(page); + put = true; + if (static_branch_unlikely(&page_pool_mem_providers) && pool->mp_ops) + put = pool->mp_ops->release_page(pool, page); + else + __page_pool_release_page_dma(pool, page); /* This may be the last page returned, releasing the pool, so * it is not safe to reference pool afterwards. @@ -553,7 +586,10 @@ void page_pool_return_page(struct page_pool *pool, struct page *page) count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt); trace_page_pool_state_release(pool, page, count); - put_page(page); + if (put) { + page_pool_clear_pp_info(page); + put_page(page); + } /* An optimization would be to call __free_pages(page, pool->p.order) * knowing page is not part of page-cache (thus avoiding a * __page_cache_release() call). 
@@ -821,6 +857,11 @@ static void __page_pool_destroy(struct page_pool *pool) if (pool->disconnect) pool->disconnect(pool); + if (pool->mp_ops) { + pool->mp_ops->destroy(pool); + static_branch_dec(&page_pool_mem_providers); + } + ptr_ring_cleanup(&pool->ring, NULL); if (pool->p.flags & PP_FLAG_DMA_MAP)
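[Editorial note] The patch above only introduces the hooks; no provider is wired up yet. To make the contract concrete, here is a minimal, hypothetical provider sketch (not part of the series), based on how page_pool_init(), page_pool_alloc_pages() and page_pool_return_page() invoke the callbacks in the diff. A real provider would also have to initialize the page's pp metadata on allocation, the way the default slow path does.

```c
/* Hypothetical provider sketch (illustration only, not from the series). */
static int mp_example_init(struct page_pool *pool)
{
	/* Called from page_pool_init(); validate parameters and pin any
	 * state handed in via pool->mp_priv. Non-zero fails pool setup.
	 */
	return 0;
}

static void mp_example_destroy(struct page_pool *pool)
{
	/* Called from __page_pool_destroy(); release what init() acquired. */
}

static struct page *mp_example_alloc_pages(struct page_pool *pool, gfp_t gfp)
{
	/* Replaces __page_pool_alloc_pages_slow() on the allocation slow
	 * path. A real provider must also set up the page's pp info and
	 * DMA state as the default slow path does before returning it.
	 */
	return alloc_pages_node(pool->p.nid, gfp, pool->p.order);
}

static bool mp_example_release_page(struct page_pool *pool, struct page *page)
{
	/* Called from page_pool_return_page(). Return true to let the pool
	 * clear pp info and put_page() the page; return false to keep the
	 * "ownership" reference inside the provider and feed the page back
	 * into the pool later.
	 */
	return true;
}

static const struct pp_memory_provider_ops mp_example_ops = {
	.init		= mp_example_init,
	.destroy	= mp_example_destroy,
	.alloc_pages	= mp_example_alloc_pages,
	.release_page	= mp_example_release_page,
};
```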
From patchwork Mon Nov 6 02:44:02 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13446232
Date: Sun, 5 Nov 2023 18:44:02 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
Message-ID: <20231106024413.2801438-4-almasrymina@google.com>
Subject: [RFC PATCH v3 03/12] net: netdev netlink api to bind dma-buf to a net device
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann, David Ahern, Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König, Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi, Stanislav Fomichev

The API takes the dma-buf fd as input and binds it to the netdevice. The user can specify the rx queues to bind the dma-buf to.

Suggested-by: Stanislav Fomichev
Signed-off-by: Mina Almasry

---

Changes in v3:
- Support binding multiple rx-queues

---

 Documentation/netlink/specs/netdev.yaml | 28 +++++++++++++++ include/uapi/linux/netdev.h | 10 ++++++ net/core/netdev-genl-gen.c | 14 ++++++++ net/core/netdev-genl-gen.h | 1 + net/core/netdev-genl.c | 6 ++++ tools/include/uapi/linux/netdev.h | 10 ++++++ tools/net/ynl/generated/netdev-user.c | 42 ++++++++++++++++++++++ tools/net/ynl/generated/netdev-user.h | 47 +++++++++++++++++++++++++ 8 files changed, 158 insertions(+)

diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml index 14511b13f305..2141c5f5c33e 100644 --- a/Documentation/netlink/specs/netdev.yaml +++ b/Documentation/netlink/specs/netdev.yaml
@@ -86,6 +86,24 @@ attribute-sets: See Documentation/networking/xdp-rx-metadata.rst for more details. type: u64 enum: xdp-rx-metadata + - + name: bind-dmabuf + attributes: + - + name: ifindex + doc: netdev ifindex to bind the dma-buf to. + type: u32 + checks: + min: 1 + - + name: queues + doc: receive queues to bind the dma-buf to. + type: u32 + multi-attr: true + - + name: dmabuf-fd + doc: dmabuf file descriptor to bind. + type: u32 operations: list:
@@ -120,6 +138,16 @@ operations: doc: Notification about device configuration being changed.
notify: dev-get mcgrp: mgmt + - + name: bind-rx + doc: Bind dmabuf to netdev + attribute-set: bind-dmabuf + do: + request: + attributes: + - ifindex + - dmabuf-fd + - queues mcast-groups: list: diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h index 2943a151d4f1..2cd367c498c7 100644 --- a/include/uapi/linux/netdev.h +++ b/include/uapi/linux/netdev.h @@ -64,11 +64,21 @@ enum { NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1) }; +enum { + NETDEV_A_BIND_DMABUF_IFINDEX = 1, + NETDEV_A_BIND_DMABUF_QUEUES, + NETDEV_A_BIND_DMABUF_DMABUF_FD, + + __NETDEV_A_BIND_DMABUF_MAX, + NETDEV_A_BIND_DMABUF_MAX = (__NETDEV_A_BIND_DMABUF_MAX - 1) +}; + enum { NETDEV_CMD_DEV_GET = 1, NETDEV_CMD_DEV_ADD_NTF, NETDEV_CMD_DEV_DEL_NTF, NETDEV_CMD_DEV_CHANGE_NTF, + NETDEV_CMD_BIND_RX, __NETDEV_CMD_MAX, NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1) diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c index ea9231378aa6..58300efaf4e5 100644 --- a/net/core/netdev-genl-gen.c +++ b/net/core/netdev-genl-gen.c @@ -15,6 +15,13 @@ static const struct nla_policy netdev_dev_get_nl_policy[NETDEV_A_DEV_IFINDEX + 1 [NETDEV_A_DEV_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1), }; +/* NETDEV_CMD_BIND_RX - do */ +static const struct nla_policy netdev_bind_rx_nl_policy[NETDEV_A_BIND_DMABUF_DMABUF_FD + 1] = { + [NETDEV_A_BIND_DMABUF_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1), + [NETDEV_A_BIND_DMABUF_DMABUF_FD] = { .type = NLA_U32, }, + [NETDEV_A_BIND_DMABUF_QUEUES] = { .type = NLA_U32, }, +}; + /* Ops table for netdev */ static const struct genl_split_ops netdev_nl_ops[] = { { @@ -29,6 +36,13 @@ static const struct genl_split_ops netdev_nl_ops[] = { .dumpit = netdev_nl_dev_get_dumpit, .flags = GENL_CMD_CAP_DUMP, }, + { + .cmd = NETDEV_CMD_BIND_RX, + .doit = netdev_nl_bind_rx_doit, + .policy = netdev_bind_rx_nl_policy, + .maxattr = NETDEV_A_BIND_DMABUF_DMABUF_FD, + .flags = GENL_CMD_CAP_DO, + }, }; static const struct genl_multicast_group netdev_nl_mcgrps[] = { diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h index 7b370c073e7d..5aaeb435ec08 100644 --- a/net/core/netdev-genl-gen.h +++ b/net/core/netdev-genl-gen.h @@ -13,6 +13,7 @@ int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info); int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb); +int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info); enum { NETDEV_NLGRP_MGMT, diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c index fe61f85bcf33..59d3d512d9cc 100644 --- a/net/core/netdev-genl.c +++ b/net/core/netdev-genl.c @@ -129,6 +129,12 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb) return skb->len; } +/* Stub */ +int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info) +{ + return 0; +} + static int netdev_genl_netdevice_event(struct notifier_block *nb, unsigned long event, void *ptr) { diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h index 2943a151d4f1..2cd367c498c7 100644 --- a/tools/include/uapi/linux/netdev.h +++ b/tools/include/uapi/linux/netdev.h @@ -64,11 +64,21 @@ enum { NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1) }; +enum { + NETDEV_A_BIND_DMABUF_IFINDEX = 1, + NETDEV_A_BIND_DMABUF_QUEUES, + NETDEV_A_BIND_DMABUF_DMABUF_FD, + + __NETDEV_A_BIND_DMABUF_MAX, + NETDEV_A_BIND_DMABUF_MAX = (__NETDEV_A_BIND_DMABUF_MAX - 1) +}; + enum { NETDEV_CMD_DEV_GET = 1, NETDEV_CMD_DEV_ADD_NTF, NETDEV_CMD_DEV_DEL_NTF, NETDEV_CMD_DEV_CHANGE_NTF, + NETDEV_CMD_BIND_RX, __NETDEV_CMD_MAX, NETDEV_CMD_MAX = 
(__NETDEV_CMD_MAX - 1) diff --git a/tools/net/ynl/generated/netdev-user.c b/tools/net/ynl/generated/netdev-user.c index b5ffe8cd1144..d5f4c6d4c2b2 100644 --- a/tools/net/ynl/generated/netdev-user.c +++ b/tools/net/ynl/generated/netdev-user.c @@ -18,6 +18,7 @@ static const char * const netdev_op_strmap[] = { [NETDEV_CMD_DEV_ADD_NTF] = "dev-add-ntf", [NETDEV_CMD_DEV_DEL_NTF] = "dev-del-ntf", [NETDEV_CMD_DEV_CHANGE_NTF] = "dev-change-ntf", + [NETDEV_CMD_BIND_RX] = "bind-rx", }; const char *netdev_op_str(int op) @@ -72,6 +73,17 @@ struct ynl_policy_nest netdev_dev_nest = { .table = netdev_dev_policy, }; +struct ynl_policy_attr netdev_bind_dmabuf_policy[NETDEV_A_BIND_DMABUF_MAX + 1] = { + [NETDEV_A_BIND_DMABUF_IFINDEX] = { .name = "ifindex", .type = YNL_PT_U32, }, + [NETDEV_A_BIND_DMABUF_QUEUES] = { .name = "queues", .type = YNL_PT_U32, }, + [NETDEV_A_BIND_DMABUF_DMABUF_FD] = { .name = "dmabuf-fd", .type = YNL_PT_U32, }, +}; + +struct ynl_policy_nest netdev_bind_dmabuf_nest = { + .max_attr = NETDEV_A_BIND_DMABUF_MAX, + .table = netdev_bind_dmabuf_policy, +}; + /* Common nested types */ /* ============== NETDEV_CMD_DEV_GET ============== */ /* NETDEV_CMD_DEV_GET - do */ @@ -197,6 +209,36 @@ void netdev_dev_get_ntf_free(struct netdev_dev_get_ntf *rsp) free(rsp); } +/* ============== NETDEV_CMD_BIND_RX ============== */ +/* NETDEV_CMD_BIND_RX - do */ +void netdev_bind_rx_req_free(struct netdev_bind_rx_req *req) +{ + free(req->queues); + free(req); +} + +int netdev_bind_rx(struct ynl_sock *ys, struct netdev_bind_rx_req *req) +{ + struct nlmsghdr *nlh; + int err; + + nlh = ynl_gemsg_start_req(ys, ys->family_id, NETDEV_CMD_BIND_RX, 1); + ys->req_policy = &netdev_bind_dmabuf_nest; + + if (req->_present.ifindex) + mnl_attr_put_u32(nlh, NETDEV_A_BIND_DMABUF_IFINDEX, req->ifindex); + if (req->_present.dmabuf_fd) + mnl_attr_put_u32(nlh, NETDEV_A_BIND_DMABUF_DMABUF_FD, req->dmabuf_fd); + for (unsigned int i = 0; i < req->n_queues; i++) + mnl_attr_put_u32(nlh, NETDEV_A_BIND_DMABUF_QUEUES, req->queues[i]); + + err = ynl_exec(ys, nlh, NULL); + if (err < 0) + return -1; + + return 0; +} + static const struct ynl_ntf_info netdev_ntf_info[] = { [NETDEV_CMD_DEV_ADD_NTF] = { .alloc_sz = sizeof(struct netdev_dev_get_ntf), diff --git a/tools/net/ynl/generated/netdev-user.h b/tools/net/ynl/generated/netdev-user.h index 4fafac879df3..3cf9096d733a 100644 --- a/tools/net/ynl/generated/netdev-user.h +++ b/tools/net/ynl/generated/netdev-user.h @@ -87,4 +87,51 @@ struct netdev_dev_get_ntf { void netdev_dev_get_ntf_free(struct netdev_dev_get_ntf *rsp); +/* ============== NETDEV_CMD_BIND_RX ============== */ +/* NETDEV_CMD_BIND_RX - do */ +struct netdev_bind_rx_req { + struct { + __u32 ifindex:1; + __u32 dmabuf_fd:1; + } _present; + + __u32 ifindex; + __u32 dmabuf_fd; + unsigned int n_queues; + __u32 *queues; +}; + +static inline struct netdev_bind_rx_req *netdev_bind_rx_req_alloc(void) +{ + return calloc(1, sizeof(struct netdev_bind_rx_req)); +} +void netdev_bind_rx_req_free(struct netdev_bind_rx_req *req); + +static inline void +netdev_bind_rx_req_set_ifindex(struct netdev_bind_rx_req *req, __u32 ifindex) +{ + req->_present.ifindex = 1; + req->ifindex = ifindex; +} +static inline void +netdev_bind_rx_req_set_dmabuf_fd(struct netdev_bind_rx_req *req, + __u32 dmabuf_fd) +{ + req->_present.dmabuf_fd = 1; + req->dmabuf_fd = dmabuf_fd; +} +static inline void +__netdev_bind_rx_req_set_queues(struct netdev_bind_rx_req *req, __u32 *queues, + unsigned int n_queues) +{ + free(req->queues); + req->queues = queues; + req->n_queues = 
n_queues; +} + +/* + * Bind dmabuf to netdev + */ +int netdev_bind_rx(struct ynl_sock *ys, struct netdev_bind_rx_req *req); + #endif /* _LINUX_NETDEV_GEN_H */
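[Editorial note] For context, this is roughly how the generated helpers above would be used from userspace. It is a hedged sketch: it assumes the YNL C library in tools/net/ynl/lib (ynl_sock_create()/ynl_sock_destroy() and the generated ynl_netdev_family object) and a dma-buf fd obtained elsewhere. At this point in the series the kernel handler is still a stub, and a later patch makes closing the netlink socket drop the binding, so a real user keeps the socket open for as long as the binding should live.

```c
/* Userspace sketch (assumptions: YNL C library, dma-buf fd from elsewhere). */
#include <stdlib.h>
#include <linux/types.h>

#include "netdev-user.h"

static int bind_dmabuf_to_rx_queue(struct ynl_sock *ys, int ifindex,
				   int dmabuf_fd, __u32 rxq_idx)
{
	struct netdev_bind_rx_req *req;
	__u32 *queues;
	int ret;

	req = netdev_bind_rx_req_alloc();
	if (!req)
		return -1;

	netdev_bind_rx_req_set_ifindex(req, ifindex);
	netdev_bind_rx_req_set_dmabuf_fd(req, dmabuf_fd);

	/* Multiple queue indices may be passed; ownership of the array
	 * moves to the request and is released by netdev_bind_rx_req_free().
	 */
	queues = malloc(sizeof(*queues));
	if (!queues) {
		netdev_bind_rx_req_free(req);
		return -1;
	}
	queues[0] = rxq_idx;
	__netdev_bind_rx_req_set_queues(req, queues, 1);

	ret = netdev_bind_rx(ys, req);	/* 0 on success, -1 on failure */
	netdev_bind_rx_req_free(req);
	return ret;
}
```

The caller would create the socket with ynl_sock_create() against the generated netdev family and keep it open; per the next patch, the binding is torn down automatically when the socket is released.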
From patchwork Mon Nov 6 02:44:03 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13446233
Date: Sun, 5 Nov 2023 18:44:03 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
Message-ID: <20231106024413.2801438-5-almasrymina@google.com>
Subject: [RFC PATCH v3 04/12] netdev: support binding dma-buf to netdevice
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann, David Ahern, Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König, Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi, Willem de Bruijn, Kaiyuan Zhang

Add a netdev_dmabuf_binding struct which represents the dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to rx queues on the netdevice. On the binding, the dma_buf_attach & dma_buf_map_attachment will occur. The entries in the sg_table from mapping will be inserted into a genpool to make it ready for allocation.

The chunks in the genpool are owned by a dmabuf_chunk_owner struct which holds the dma-buf offset of the base of the chunk and the dma_addr of the chunk. Both are needed to use allocations that come from this chunk.

We create a new type that represents an allocation from the genpool: page_pool_iov. We set up the page_pool_iov allocation size in the genpool to PAGE_SIZE for simplicity: to match the PAGE_SIZE normally allocated by the page pool and given to the drivers.

The user can unbind the dmabuf from the netdevice by closing the netlink socket that established the binding. We do this so that the binding is automatically unbound even if the userspace process crashes.

The binding and unbinding leave an indicator in struct netdev_rx_queue that the given queue is bound, but the binding doesn't take effect until the driver actually reconfigures its queues and re-initializes its page pool.

The netdev_dmabuf_binding struct is refcounted, and releases its resources only when all the refs are released.

Signed-off-by: Willem de Bruijn
Signed-off-by: Kaiyuan Zhang
Signed-off-by: Mina Almasry

---

RFC v3:
- Support multi rx-queue binding

---

 include/linux/netdevice.h | 80 ++++++++++++ include/net/netdev_rx_queue.h | 1 + include/net/page_pool/types.h | 27 +++++ net/core/dev.c | 203 ++++++++++++++++++++++++++++++++++ net/core/netdev-genl.c | 116 ++++++++++++++++++- 5 files changed, 425 insertions(+), 2 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index b8bf669212cc..eeeda849115c 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h
@@ -52,6 +52,8 @@ #include #include #include +#include +#include struct netpoll_info; struct device;
@@ -808,6 +810,84 @@ bool rps_may_expire_flow(struct net_device *dev, u16 rxq_index, u32 flow_id, #endif #endif /* CONFIG_RPS */ +struct netdev_dmabuf_binding { + struct dma_buf *dmabuf; + struct dma_buf_attachment *attachment; + struct sg_table *sgt; + struct net_device *dev; + struct gen_pool *chunk_pool; + + /* The user holds a ref (via the netlink API) for as long as they want + * the binding to remain alive. Each page pool using this binding holds + * a ref to keep the binding alive. Each allocated page_pool_iov holds a + * ref. + * + * The binding undoes itself and unmaps the underlying dmabuf once all + * those refs are dropped and the binding is no longer desired or in + * use. + */ + refcount_t ref; + + /* The portid of the user that owns this binding. Used for netlink to + * notify us of the user dropping the bind. + */ + u32 owner_nlportid; + + /* The list of bindings currently active.
Used for netlink to notify us + * of the user dropping the bind. + */ + struct list_head list; + + /* rxq's this binding is active on. */ + struct xarray bound_rxq_list; +}; + +#ifdef CONFIG_DMA_SHARED_BUFFER +void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding); +int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd, + struct netdev_dmabuf_binding **out); +void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding); +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, + struct netdev_dmabuf_binding *binding); +#else +static inline void +__netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{ +} + +static inline int netdev_bind_dmabuf(struct net_device *dev, + unsigned int dmabuf_fd, + struct netdev_dmabuf_binding **out) +{ + return -EOPNOTSUPP; +} +static inline void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) +{ +} + +static inline int +netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, + struct netdev_dmabuf_binding *binding) +{ + return -EOPNOTSUPP; +} +#endif + +static inline void +netdev_devmem_binding_get(struct netdev_dmabuf_binding *binding) +{ + refcount_inc(&binding->ref); +} + +static inline void +netdev_devmem_binding_put(struct netdev_dmabuf_binding *binding) +{ + if (!refcount_dec_and_test(&binding->ref)) + return; + + __netdev_devmem_binding_free(binding); +} + /* XPS map type and offset of the xps map within net_device->xps_maps[]. */ enum xps_map_type { XPS_CPUS = 0, diff --git a/include/net/netdev_rx_queue.h b/include/net/netdev_rx_queue.h index cdcafb30d437..1bfcf60a145d 100644 --- a/include/net/netdev_rx_queue.h +++ b/include/net/netdev_rx_queue.h @@ -21,6 +21,7 @@ struct netdev_rx_queue { #ifdef CONFIG_XDP_SOCKETS struct xsk_buff_pool *pool; #endif + struct netdev_dmabuf_binding *binding; } ____cacheline_aligned_in_smp; /* diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index d4bea053bb7e..64386325d965 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -133,6 +133,33 @@ struct pp_memory_provider_ops { bool (*release_page)(struct page_pool *pool, struct page *page); }; +/* page_pool_iov support */ + +/* Owner of the dma-buf chunks inserted into the gen pool. Each scatterlist + * entry from the dmabuf is inserted into the genpool as a chunk, and needs + * this owner struct to keep track of some metadata necessary to create + * allocations from this chunk. + */ +struct dmabuf_genpool_chunk_owner { + /* Offset into the dma-buf where this chunk starts. */ + unsigned long base_virtual; + + /* dma_addr of the start of the chunk. */ + dma_addr_t base_dma_addr; + + /* Array of page_pool_iovs for this chunk. 
*/ + struct page_pool_iov *ppiovs; + size_t num_ppiovs; + + struct netdev_dmabuf_binding *binding; +}; + +struct page_pool_iov { + struct dmabuf_genpool_chunk_owner *owner; + + refcount_t refcount; +}; + struct page_pool { struct page_pool_params p; diff --git a/net/core/dev.c b/net/core/dev.c index a37a932a3e14..c8c3709d42c8 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -153,6 +153,9 @@ #include #include #include +#include +#include +#include #include "dev.h" #include "net-sysfs.h" @@ -2040,6 +2043,206 @@ static int call_netdevice_notifiers_mtu(unsigned long val, return call_netdevice_notifiers_info(val, &info.info); } +/* Device memory support */ + +#ifdef CONFIG_DMA_SHARED_BUFFER +static void netdev_devmem_free_chunk_owner(struct gen_pool *genpool, + struct gen_pool_chunk *chunk, + void *not_used) +{ + struct dmabuf_genpool_chunk_owner *owner = chunk->owner; + + kvfree(owner->ppiovs); + kfree(owner); +} + +void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) +{ + size_t size, avail; + + gen_pool_for_each_chunk(binding->chunk_pool, + netdev_devmem_free_chunk_owner, NULL); + + size = gen_pool_size(binding->chunk_pool); + avail = gen_pool_avail(binding->chunk_pool); + + if (!WARN(size != avail, "can't destroy genpool. size=%lu, avail=%lu", + size, avail)) + gen_pool_destroy(binding->chunk_pool); + + dma_buf_unmap_attachment(binding->attachment, binding->sgt, + DMA_BIDIRECTIONAL); + dma_buf_detach(binding->dmabuf, binding->attachment); + dma_buf_put(binding->dmabuf); + kfree(binding); +} + +void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) +{ + struct netdev_rx_queue *rxq; + unsigned long xa_idx; + + if (!binding) + return; + + list_del_rcu(&binding->list); + + xa_for_each(&binding->bound_rxq_list, xa_idx, rxq) + if (rxq->binding == binding) + /* We hold the rtnl_lock while binding/unbinding + * dma-buf, so we can't race with another thread that + * is also modifying this value. However, the driver + * may read this config while it's creating its + * rx-queues. WRITE_ONCE() here to match the + * READ_ONCE() in the driver. + */ + WRITE_ONCE(rxq->binding, NULL); + + netdev_devmem_binding_put(binding); +} + +int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, + struct netdev_dmabuf_binding *binding) +{ + struct netdev_rx_queue *rxq; + u32 xa_idx; + int err; + + rxq = __netif_get_rx_queue(dev, rxq_idx); + + if (rxq->binding) + return -EEXIST; + + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b, + GFP_KERNEL); + if (err) + return err; + + /*We hold the rtnl_lock while binding/unbinding dma-buf, so we can't + * race with another thread that is also modifying this value. However, + * the driver may read this config while it's creating its * rx-queues. + * WRITE_ONCE() here to match the READ_ONCE() in the driver. 
+ */ + WRITE_ONCE(rxq->binding, binding); + + return 0; +} + +int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd, + struct netdev_dmabuf_binding **out) +{ + struct netdev_dmabuf_binding *binding; + struct scatterlist *sg; + struct dma_buf *dmabuf; + unsigned int sg_idx, i; + unsigned long virtual; + int err; + + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + dmabuf = dma_buf_get(dmabuf_fd); + if (IS_ERR_OR_NULL(dmabuf)) + return -EBADFD; + + binding = kzalloc_node(sizeof(*binding), GFP_KERNEL, + dev_to_node(&dev->dev)); + if (!binding) { + err = -ENOMEM; + goto err_put_dmabuf; + } + + xa_init_flags(&binding->bound_rxq_list, XA_FLAGS_ALLOC); + + refcount_set(&binding->ref, 1); + + binding->dmabuf = dmabuf; + + binding->attachment = dma_buf_attach(binding->dmabuf, dev->dev.parent); + if (IS_ERR(binding->attachment)) { + err = PTR_ERR(binding->attachment); + goto err_free_binding; + } + + binding->sgt = dma_buf_map_attachment(binding->attachment, + DMA_BIDIRECTIONAL); + if (IS_ERR(binding->sgt)) { + err = PTR_ERR(binding->sgt); + goto err_detach; + } + + /* For simplicity we expect to make PAGE_SIZE allocations, but the + * binding can be much more flexible than that. We may be able to + * allocate MTU sized chunks here. Leave that for future work... + */ + binding->chunk_pool = gen_pool_create(PAGE_SHIFT, + dev_to_node(&dev->dev)); + if (!binding->chunk_pool) { + err = -ENOMEM; + goto err_unmap; + } + + virtual = 0; + for_each_sgtable_dma_sg(binding->sgt, sg, sg_idx) { + dma_addr_t dma_addr = sg_dma_address(sg); + struct dmabuf_genpool_chunk_owner *owner; + size_t len = sg_dma_len(sg); + struct page_pool_iov *ppiov; + + owner = kzalloc_node(sizeof(*owner), GFP_KERNEL, + dev_to_node(&dev->dev)); + owner->base_virtual = virtual; + owner->base_dma_addr = dma_addr; + owner->num_ppiovs = len / PAGE_SIZE; + owner->binding = binding; + + err = gen_pool_add_owner(binding->chunk_pool, dma_addr, + dma_addr, len, dev_to_node(&dev->dev), + owner); + if (err) { + err = -EINVAL; + goto err_free_chunks; + } + + owner->ppiovs = kvmalloc_array(owner->num_ppiovs, + sizeof(*owner->ppiovs), + GFP_KERNEL); + if (!owner->ppiovs) { + err = -ENOMEM; + goto err_free_chunks; + } + + for (i = 0; i < owner->num_ppiovs; i++) { + ppiov = &owner->ppiovs[i]; + ppiov->owner = owner; + refcount_set(&ppiov->refcount, 1); + } + + dma_addr += len; + virtual += len; + } + + *out = binding; + + return 0; + +err_free_chunks: + gen_pool_for_each_chunk(binding->chunk_pool, + netdev_devmem_free_chunk_owner, NULL); + gen_pool_destroy(binding->chunk_pool); +err_unmap: + dma_buf_unmap_attachment(binding->attachment, binding->sgt, + DMA_BIDIRECTIONAL); +err_detach: + dma_buf_detach(dmabuf, binding->attachment); +err_free_binding: + kfree(binding); +err_put_dmabuf: + dma_buf_put(dmabuf); + return err; +} +#endif + #ifdef CONFIG_NET_INGRESS static DEFINE_STATIC_KEY_FALSE(ingress_needed_key); diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c index 59d3d512d9cc..2c2a62593217 100644 --- a/net/core/netdev-genl.c +++ b/net/core/netdev-genl.c @@ -129,10 +129,89 @@ int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb) return skb->len; } -/* Stub */ +static LIST_HEAD(netdev_rbinding_list); + int netdev_nl_bind_rx_doit(struct sk_buff *skb, struct genl_info *info) { - return 0; + struct netdev_dmabuf_binding *out_binding; + u32 ifindex, dmabuf_fd, rxq_idx; + struct net_device *netdev; + struct sk_buff *rsp; + int rem, err = 0; + void *hdr; + struct nlattr *attr; + + if 
(GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX) || + GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_DMABUF_FD) || + GENL_REQ_ATTR_CHECK(info, NETDEV_A_BIND_DMABUF_QUEUES)) + return -EINVAL; + + ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]); + dmabuf_fd = nla_get_u32(info->attrs[NETDEV_A_BIND_DMABUF_DMABUF_FD]); + + rtnl_lock(); + + netdev = __dev_get_by_index(genl_info_net(info), ifindex); + if (!netdev) { + err = -ENODEV; + goto err_unlock; + } + + err = netdev_bind_dmabuf(netdev, dmabuf_fd, &out_binding); + if (err) + goto err_unlock; + + nla_for_each_attr(attr, genlmsg_data(info->genlhdr), + genlmsg_len(info->genlhdr), rem) { + switch (nla_type(attr)) { + case NETDEV_A_BIND_DMABUF_QUEUES: + rxq_idx = nla_get_u32(attr); + + if (rxq_idx >= netdev->num_rx_queues) { + err = -ERANGE; + goto err_unbind; + } + + err = netdev_bind_dmabuf_to_queue(netdev, rxq_idx, + out_binding); + if (err) + goto err_unbind; + + break; + default: + break; + } + } + + out_binding->owner_nlportid = info->snd_portid; + list_add_rcu(&out_binding->list, &netdev_rbinding_list); + + rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!rsp) { + err = -ENOMEM; + goto err_unbind; + } + + hdr = genlmsg_put(rsp, info->snd_portid, info->snd_seq, + &netdev_nl_family, 0, info->genlhdr->cmd); + if (!hdr) { + err = -EMSGSIZE; + goto err_genlmsg_free; + } + + genlmsg_end(rsp, hdr); + + rtnl_unlock(); + + return genlmsg_reply(rsp, info); + +err_genlmsg_free: + nlmsg_free(rsp); +err_unbind: + netdev_unbind_dmabuf(out_binding); +err_unlock: + rtnl_unlock(); + return err; } static int netdev_genl_netdevice_event(struct notifier_block *nb, @@ -155,10 +234,37 @@ static int netdev_genl_netdevice_event(struct notifier_block *nb, return NOTIFY_OK; } +static int netdev_netlink_notify(struct notifier_block *nb, unsigned long state, + void *_notify) +{ + struct netlink_notify *notify = _notify; + struct netdev_dmabuf_binding *rbinding; + + if (state != NETLINK_URELEASE || notify->protocol != NETLINK_GENERIC) + return NOTIFY_DONE; + + rcu_read_lock(); + + list_for_each_entry_rcu(rbinding, &netdev_rbinding_list, list) { + if (rbinding->owner_nlportid == notify->portid) { + netdev_unbind_dmabuf(rbinding); + break; + } + } + + rcu_read_unlock(); + + return NOTIFY_OK; +} + static struct notifier_block netdev_genl_nb = { .notifier_call = netdev_genl_netdevice_event, }; +static struct notifier_block netdev_netlink_notifier = { + .notifier_call = netdev_netlink_notify, +}; + static int __init netdev_genl_init(void) { int err; @@ -171,8 +277,14 @@ static int __init netdev_genl_init(void) if (err) goto err_unreg_ntf; + err = netlink_register_notifier(&netdev_netlink_notifier); + if (err) + goto err_unreg_family; + return 0; +err_unreg_family: + genl_unregister_family(&netdev_nl_family); err_unreg_ntf: unregister_netdevice_notifier(&netdev_genl_nb); return err;
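[Editorial note] To make the genpool layout of the patch above concrete: each scatterlist entry of the mapped dma-buf becomes one genpool chunk, whose dmabuf_genpool_chunk_owner records where the chunk starts both inside the dma-buf (base_virtual) and in device address space (base_dma_addr), and carries one page_pool_iov per PAGE_SIZE slice (num_ppiovs = len / PAGE_SIZE). A small illustrative helper (not in the series) shows how the two base addresses are meant to be used for the i-th slice of a chunk:

```c
/* Illustration only: address math for slice i of a chunk owner as set up
 * in netdev_bind_dmabuf() above.
 */
static inline dma_addr_t
chunk_slice_dma_addr(const struct dmabuf_genpool_chunk_owner *owner, size_t i)
{
	/* Device address the NIC would use for this PAGE_SIZE slice. */
	return owner->base_dma_addr + ((dma_addr_t)i << PAGE_SHIFT);
}

static inline unsigned long
chunk_slice_dmabuf_offset(const struct dmabuf_genpool_chunk_owner *owner, size_t i)
{
	/* Offset of the same slice within the dma-buf itself. */
	return owner->base_virtual + (i << PAGE_SHIFT);
}
```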
From patchwork Mon Nov 6 02:44:04 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13446236
Date: Sun, 5 Nov 2023 18:44:04 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
Message-ID: <20231106024413.2801438-6-almasrymina@google.com>
Subject: [RFC PATCH v3 05/12] netdev: netdevice devmem allocator
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann, David Ahern, Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König, Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi, Willem de Bruijn, Kaiyuan Zhang

Implement netdev devmem allocator. The allocator takes a given struct netdev_dmabuf_binding as input and allocates page_pool_iov from that binding.
The allocation simply delegates to the binding's genpool for the allocation logic and wraps the returned memory region in a page_pool_iov struct. page_pool_iov are refcounted and are freed back to the binding when the refcount drops to 0. Signed-off-by: Willem de Bruijn Signed-off-by: Kaiyuan Zhang Signed-off-by: Mina Almasry --- include/linux/netdevice.h | 13 ++++++++++++ include/net/page_pool/helpers.h | 28 +++++++++++++++++++++++++ net/core/dev.c | 37 +++++++++++++++++++++++++++++++++ 3 files changed, 78 insertions(+) diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index eeeda849115c..1c351c138a5b 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -843,6 +843,9 @@ struct netdev_dmabuf_binding { }; #ifdef CONFIG_DMA_SHARED_BUFFER +struct page_pool_iov * +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding); +void netdev_free_devmem(struct page_pool_iov *ppiov); void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding); int netdev_bind_dmabuf(struct net_device *dev, unsigned int dmabuf_fd, struct netdev_dmabuf_binding **out); @@ -850,6 +853,16 @@ void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding); int netdev_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx, struct netdev_dmabuf_binding *binding); #else +static inline struct page_pool_iov * +netdev_alloc_devmem(struct netdev_dmabuf_binding *binding) +{ + return NULL; +} + +static inline void netdev_free_devmem(struct page_pool_iov *ppiov) +{ +} + static inline void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) { diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 4ebd544ae977..78cbb040af94 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -83,6 +83,34 @@ static inline u64 *page_pool_ethtool_stats_get(u64 *data, void *stats) } #endif +/* page_pool_iov support */ + +static inline struct dmabuf_genpool_chunk_owner * +page_pool_iov_owner(const struct page_pool_iov *ppiov) +{ + return ppiov->owner; +} + +static inline unsigned int page_pool_iov_idx(const struct page_pool_iov *ppiov) +{ + return ppiov - page_pool_iov_owner(ppiov)->ppiovs; +} + +static inline dma_addr_t +page_pool_iov_dma_addr(const struct page_pool_iov *ppiov) +{ + struct dmabuf_genpool_chunk_owner *owner = page_pool_iov_owner(ppiov); + + return owner->base_dma_addr + + ((dma_addr_t)page_pool_iov_idx(ppiov) << PAGE_SHIFT); +} + +static inline struct netdev_dmabuf_binding * +page_pool_iov_binding(const struct page_pool_iov *ppiov) +{ + return page_pool_iov_owner(ppiov)->binding; +} + /** * page_pool_dev_alloc_pages() - allocate a page. 
* @pool: pool from which to allocate diff --git a/net/core/dev.c b/net/core/dev.c index c8c3709d42c8..2315bbc03ec8 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -156,6 +156,7 @@ #include #include #include +#include #include "dev.h" #include "net-sysfs.h" @@ -2077,6 +2078,42 @@ void __netdev_devmem_binding_free(struct netdev_dmabuf_binding *binding) kfree(binding); } +struct page_pool_iov *netdev_alloc_devmem(struct netdev_dmabuf_binding *binding) +{ + struct dmabuf_genpool_chunk_owner *owner; + struct page_pool_iov *ppiov; + unsigned long dma_addr; + ssize_t offset; + ssize_t index; + + dma_addr = gen_pool_alloc_owner(binding->chunk_pool, PAGE_SIZE, + (void **)&owner); + if (!dma_addr) + return NULL; + + offset = dma_addr - owner->base_dma_addr; + index = offset / PAGE_SIZE; + ppiov = &owner->ppiovs[index]; + + netdev_devmem_binding_get(binding); + + return ppiov; +} + +void netdev_free_devmem(struct page_pool_iov *ppiov) +{ + struct netdev_dmabuf_binding *binding = page_pool_iov_binding(ppiov); + + refcount_set(&ppiov->refcount, 1); + + if (gen_pool_has_addr(binding->chunk_pool, + page_pool_iov_dma_addr(ppiov), PAGE_SIZE)) + gen_pool_free(binding->chunk_pool, + page_pool_iov_dma_addr(ppiov), PAGE_SIZE); + + netdev_devmem_binding_put(binding); +} + void netdev_unbind_dmabuf(struct netdev_dmabuf_binding *binding) { struct netdev_rx_queue *rxq;
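[Editorial note] A short usage sketch for the allocator above (hypothetical caller, not from the series): a memory provider holding a binding reference obtains PAGE_SIZE slices with netdev_alloc_devmem() and retires them with netdev_free_devmem(). Each allocation pins the binding via netdev_devmem_binding_get(); freeing returns the slice to the binding's genpool and drops that reference.

```c
/* Hypothetical caller of the devmem allocator (illustration only). */
static struct page_pool_iov *
devmem_get_slice(struct netdev_dmabuf_binding *binding)
{
	struct page_pool_iov *ppiov;

	ppiov = netdev_alloc_devmem(binding);	/* NULL if the genpool is exhausted */
	if (!ppiov)
		return NULL;

	/* ppiov->refcount is 1 here; the device address to program into an
	 * rx descriptor would come from page_pool_iov_dma_addr(ppiov).
	 */
	return ppiov;
}

static void devmem_put_slice(struct page_pool_iov *ppiov)
{
	/* Returns the slice to the binding's genpool and drops the binding ref. */
	netdev_free_devmem(ppiov);
}
```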
Date: Sun, 5 Nov 2023 18:44:05 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
References: <20231106024413.2801438-1-almasrymina@google.com>
Message-ID: <20231106024413.2801438-7-almasrymina@google.com>
Subject: [RFC PATCH v3 06/12] memory-provider: dmabuf devmem memory provider
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann, David Ahern, Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König, Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi, Willem de Bruijn, Kaiyuan Zhang

Implement a memory provider that allocates dmabuf devmem page_pool_iovs. Support for PP_FLAG_DMA_MAP and PP_FLAG_DMA_SYNC_DEV is omitted for simplicity.

The provider receives a reference to the struct netdev_dmabuf_binding via the pool->mp_priv pointer. The driver needs to set this pointer for the provider in the page_pool_params.

The provider obtains a reference on the netdev_dmabuf_binding, which guarantees that the binding and the underlying mapping remain alive until the provider is destroyed.
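A rough driver-side sketch of the setup described above (not part of this patch; the rxq/binding plumbing and names such as ring_size are illustrative only):

	/* Hypothetical RX-queue setup in a driver that already holds a
	 * struct netdev_dmabuf_binding for this queue (established by the
	 * dmabuf binding patches earlier in this series).
	 *
	 * PP_FLAG_DMA_MAP / PP_FLAG_DMA_SYNC_DEV are deliberately left
	 * clear: the dmabuf chunks are already DMA-mapped by the binding,
	 * and mp_dmabuf_devmem_init() rejects both flags.
	 */
	struct page_pool_params pp_params = {
		.order            = 0,
		.pool_size        = ring_size,
		.nid              = NUMA_NO_NODE,
		.dev              = &pdev->dev,
		.dma_dir          = DMA_FROM_DEVICE,
		.flags            = 0,
		.memory_provider  = PP_MP_DMABUF_DEVMEM,
		.mp_priv          = rxq->binding, /* struct netdev_dmabuf_binding * */
	};
	struct page_pool *pool = page_pool_create(&pp_params); /* error handling omitted */

	/* "Pages" handed out by this pool are PP_DEVMEM-tagged page_pool_iov
	 * pointers (see the page-pool device memory support patch later in
	 * this series); they must never be dereferenced. The driver only
	 * consumes the DMA address when posting RX descriptors.
	 */
	struct page *page = page_pool_alloc_pages(pool, GFP_ATOMIC);
	dma_addr_t dma = page_pool_get_dma_addr(page);

Whether mp_priv ends up as a page_pool_params field or is assigned on the pool directly is an implementation detail of this RFC; either way the provider only reads the binding back out of pool->mp_priv in its init/alloc paths.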
Signed-off-by: Willem de Bruijn Signed-off-by: Kaiyuan Zhang Signed-off-by: Mina Almasry --- include/net/page_pool/helpers.h | 40 +++++++++++++++++ include/net/page_pool/types.h | 10 +++++ net/core/page_pool.c | 76 +++++++++++++++++++++++++++++++++ 3 files changed, 126 insertions(+) diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 78cbb040af94..b93243c2a640 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -53,6 +53,7 @@ #define _NET_PAGE_POOL_HELPERS_H #include +#include #ifdef CONFIG_PAGE_POOL_STATS int page_pool_ethtool_stats_get_count(void); @@ -111,6 +112,45 @@ page_pool_iov_binding(const struct page_pool_iov *ppiov) return page_pool_iov_owner(ppiov)->binding; } +static inline int page_pool_iov_refcount(const struct page_pool_iov *ppiov) +{ + return refcount_read(&ppiov->refcount); +} + +static inline void page_pool_iov_get_many(struct page_pool_iov *ppiov, + unsigned int count) +{ + refcount_add(count, &ppiov->refcount); +} + +void __page_pool_iov_free(struct page_pool_iov *ppiov); + +static inline void page_pool_iov_put_many(struct page_pool_iov *ppiov, + unsigned int count) +{ + if (!refcount_sub_and_test(count, &ppiov->refcount)) + return; + + __page_pool_iov_free(ppiov); +} + +/* page pool mm helpers */ + +static inline bool page_is_page_pool_iov(const struct page *page) +{ + return (unsigned long)page & PP_DEVMEM; +} + +static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page) +{ + if (page_is_page_pool_iov(page)) + return (struct page_pool_iov *)((unsigned long)page & + ~PP_DEVMEM); + + DEBUG_NET_WARN_ON_ONCE(true); + return NULL; +} + /** * page_pool_dev_alloc_pages() - allocate a page. * @pool: pool from which to allocate diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 64386325d965..1e67f9466250 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -124,6 +124,7 @@ struct mem_provider; enum pp_memory_provider_type { __PP_MP_NONE, /* Use system allocator directly */ + PP_MP_DMABUF_DEVMEM, /* dmabuf devmem provider */ }; struct pp_memory_provider_ops { @@ -133,8 +134,15 @@ struct pp_memory_provider_ops { bool (*release_page)(struct page_pool *pool, struct page *page); }; +extern const struct pp_memory_provider_ops dmabuf_devmem_ops; + /* page_pool_iov support */ +/* We overload the LSB of the struct page pointer to indicate whether it's + * a page or page_pool_iov. + */ +#define PP_DEVMEM 0x01UL + /* Owner of the dma-buf chunks inserted into the gen pool. 
Each scatterlist * entry from the dmabuf is inserted into the genpool as a chunk, and needs * this owner struct to keep track of some metadata necessary to create @@ -158,6 +166,8 @@ struct page_pool_iov { struct dmabuf_genpool_chunk_owner *owner; refcount_t refcount; + + struct page_pool *pp; }; struct page_pool { diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 7ea1f4682479..138ddea0b28f 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -20,6 +20,7 @@ #include #include #include +#include #include @@ -231,6 +232,9 @@ static int page_pool_init(struct page_pool *pool, switch (pool->p.memory_provider) { case __PP_MP_NONE: break; + case PP_MP_DMABUF_DEVMEM: + pool->mp_ops = &dmabuf_devmem_ops; + break; default: err = -EINVAL; goto free_ptr_ring; @@ -996,3 +1000,75 @@ void page_pool_update_nid(struct page_pool *pool, int new_nid) } } EXPORT_SYMBOL(page_pool_update_nid); + +void __page_pool_iov_free(struct page_pool_iov *ppiov) +{ + if (ppiov->pp->mp_ops != &dmabuf_devmem_ops) + return; + + netdev_free_devmem(ppiov); +} +EXPORT_SYMBOL_GPL(__page_pool_iov_free); + +/*** "Dmabuf devmem memory provider" ***/ + +static int mp_dmabuf_devmem_init(struct page_pool *pool) +{ + struct netdev_dmabuf_binding *binding = pool->mp_priv; + + if (!binding) + return -EINVAL; + + if (pool->p.flags & PP_FLAG_DMA_MAP || + pool->p.flags & PP_FLAG_DMA_SYNC_DEV) + return -EOPNOTSUPP; + + netdev_devmem_binding_get(binding); + return 0; +} + +static struct page *mp_dmabuf_devmem_alloc_pages(struct page_pool *pool, + gfp_t gfp) +{ + struct netdev_dmabuf_binding *binding = pool->mp_priv; + struct page_pool_iov *ppiov; + + ppiov = netdev_alloc_devmem(binding); + if (!ppiov) + return NULL; + + ppiov->pp = pool; + pool->pages_state_hold_cnt++; + trace_page_pool_state_hold(pool, (struct page *)ppiov, + pool->pages_state_hold_cnt); + return (struct page *)((unsigned long)ppiov | PP_DEVMEM); +} + +static void mp_dmabuf_devmem_destroy(struct page_pool *pool) +{ + struct netdev_dmabuf_binding *binding = pool->mp_priv; + + netdev_devmem_binding_put(binding); +} + +static bool mp_dmabuf_devmem_release_page(struct page_pool *pool, + struct page *page) +{ + struct page_pool_iov *ppiov; + + if (WARN_ON_ONCE(!page_is_page_pool_iov(page))) + return false; + + ppiov = page_to_page_pool_iov(page); + page_pool_iov_put_many(ppiov, 1); + /* We don't want the page pool put_page()ing our page_pool_iovs. 
*/ + return false; +} + +const struct pp_memory_provider_ops dmabuf_devmem_ops = { + .init = mp_dmabuf_devmem_init, + .destroy = mp_dmabuf_devmem_destroy, + .alloc_pages = mp_dmabuf_devmem_alloc_pages, + .release_page = mp_dmabuf_devmem_release_page, +}; +EXPORT_SYMBOL(dmabuf_devmem_ops);

From patchwork Mon Nov 6 02:44:06 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13446234
Date: Sun, 5 Nov 2023 18:44:06 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
References: <20231106024413.2801438-1-almasrymina@google.com>
Message-ID: <20231106024413.2801438-8-almasrymina@google.com>
Subject: [RFC PATCH v3 07/12] page-pool: device memory support From: Mina Almasry To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Cc: Mina Almasry , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Shakeel Butt , Jeroen de Borst , Praveen Kaligineedi Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Overload the LSB of struct page* to indicate that it's a page_pool_iov. Refactor mm calls on struct page* into helpers, and add page_pool_iov handling on those helpers. Modify callers of these mm APIs with calls to these helpers instead. In areas where struct page* is dereferenced, add a check for special handling of page_pool_iov. Signed-off-by: Mina Almasry --- include/net/page_pool/helpers.h | 74 ++++++++++++++++++++++++++++++++- net/core/page_pool.c | 63 ++++++++++++++++++++-------- 2 files changed, 118 insertions(+), 19 deletions(-) diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index b93243c2a640..08f1a2cc70d2 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -151,6 +151,64 @@ static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page) return NULL; } +static inline int page_pool_page_ref_count(struct page *page) +{ + if (page_is_page_pool_iov(page)) + return page_pool_iov_refcount(page_to_page_pool_iov(page)); + + return page_ref_count(page); +} + +static inline void page_pool_page_get_many(struct page *page, + unsigned int count) +{ + if (page_is_page_pool_iov(page)) + return page_pool_iov_get_many(page_to_page_pool_iov(page), + count); + + return page_ref_add(page, count); +} + +static inline void page_pool_page_put_many(struct page *page, + unsigned int count) +{ + if (page_is_page_pool_iov(page)) + return page_pool_iov_put_many(page_to_page_pool_iov(page), + count); + + if (count > 1) + page_ref_sub(page, count - 1); + + put_page(page); +} + +static inline bool page_pool_page_is_pfmemalloc(struct page *page) +{ + if (page_is_page_pool_iov(page)) + return false; + + return page_is_pfmemalloc(page); +} + +static inline bool page_pool_page_is_pref_nid(struct page *page, int pref_nid) +{ + /* Assume page_pool_iov are on the preferred node without actually + * checking... + * + * This check is only used to check for recycling memory in the page + * pool's fast paths. Currently the only implementation of page_pool_iov + * is dmabuf device memory. It's a deliberate decision by the user to + * bind a certain dmabuf to a certain netdev, and the netdev rx queue + * would not be able to reallocate memory from another dmabuf that + * exists on the preferred node, so, this check doesn't make much sense + * in this case. Assume all page_pool_iovs can be recycled for now. + */ + if (page_is_page_pool_iov(page)) + return true; + + return page_to_nid(page) == pref_nid; +} + /** * page_pool_dev_alloc_pages() - allocate a page. * @pool: pool from which to allocate @@ -301,6 +359,9 @@ static inline long page_pool_defrag_page(struct page *page, long nr) { long ret; + if (page_is_page_pool_iov(page)) + return -EINVAL; + /* If nr == pp_frag_count then we have cleared all remaining * references to the page: * 1. 'n == 1': no need to actually overwrite it. 
@@ -431,7 +492,12 @@ static inline void page_pool_free_va(struct page_pool *pool, void *va, */ static inline dma_addr_t page_pool_get_dma_addr(struct page *page) { - dma_addr_t ret = page->dma_addr; + dma_addr_t ret; + + if (page_is_page_pool_iov(page)) + return page_pool_iov_dma_addr(page_to_page_pool_iov(page)); + + ret = page->dma_addr; if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) ret <<= PAGE_SHIFT; @@ -441,6 +507,12 @@ static inline dma_addr_t page_pool_get_dma_addr(struct page *page) static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr) { + /* page_pool_iovs are mapped and their dma-addr can't be modified. */ + if (page_is_page_pool_iov(page)) { + DEBUG_NET_WARN_ON_ONCE(true); + return false; + } + if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) { page->dma_addr = addr >> PAGE_SHIFT; diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 138ddea0b28f..d211996d423b 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -317,7 +317,7 @@ static struct page *page_pool_refill_alloc_cache(struct page_pool *pool) if (unlikely(!page)) break; - if (likely(page_to_nid(page) == pref_nid)) { + if (likely(page_pool_page_is_pref_nid(page, pref_nid))) { pool->alloc.cache[pool->alloc.count++] = page; } else { /* NUMA mismatch; @@ -362,7 +362,15 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool, struct page *page, unsigned int dma_sync_size) { - dma_addr_t dma_addr = page_pool_get_dma_addr(page); + dma_addr_t dma_addr; + + /* page_pool_iov memory provider do not support PP_FLAG_DMA_SYNC_DEV */ + if (page_is_page_pool_iov(page)) { + DEBUG_NET_WARN_ON_ONCE(true); + return; + } + + dma_addr = page_pool_get_dma_addr(page); dma_sync_size = min(dma_sync_size, pool->p.max_len); dma_sync_single_range_for_device(pool->p.dev, dma_addr, @@ -374,6 +382,12 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page) { dma_addr_t dma; + if (page_is_page_pool_iov(page)) { + /* page_pool_iovs are already mapped */ + DEBUG_NET_WARN_ON_ONCE(true); + return true; + } + /* Setup DMA mapping: use 'struct page' area for storing DMA-addr * since dma_addr_t can be either 32 or 64 bits and does not always fit * into page private data (i.e 32bit cpu with 64bit DMA caps) @@ -405,22 +419,33 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page) static void page_pool_set_pp_info(struct page_pool *pool, struct page *page) { - page->pp = pool; - page->pp_magic |= PP_SIGNATURE; - - /* Ensuring all pages have been split into one fragment initially: - * page_pool_set_pp_info() is only called once for every page when it - * is allocated from the page allocator and page_pool_fragment_page() - * is dirtying the same cache line as the page->pp_magic above, so - * the overhead is negligible. - */ - page_pool_fragment_page(page, 1); + if (!page_is_page_pool_iov(page)) { + page->pp = pool; + page->pp_magic |= PP_SIGNATURE; + + /* Ensuring all pages have been split into one fragment + * initially: + * page_pool_set_pp_info() is only called once for every page + * when it is allocated from the page allocator and + * page_pool_fragment_page() is dirtying the same cache line as + * the page->pp_magic above, so * the overhead is negligible. 
+ */ + page_pool_fragment_page(page, 1); + } else { + page_to_page_pool_iov(page)->pp = pool; + } + if (pool->p.init_callback) pool->p.init_callback(page, pool->p.init_arg); } static void page_pool_clear_pp_info(struct page *page) { + if (page_is_page_pool_iov(page)) { + page_to_page_pool_iov(page)->pp = NULL; + return; + } + page->pp_magic = 0; page->pp = NULL; } @@ -630,7 +655,7 @@ static bool page_pool_recycle_in_cache(struct page *page, return false; } - /* Caller MUST have verified/know (page_ref_count(page) == 1) */ + /* Caller MUST have verified/know (page_pool_page_ref_count(page) == 1) */ pool->alloc.cache[pool->alloc.count++] = page; recycle_stat_inc(pool, cached); return true; @@ -655,9 +680,10 @@ __page_pool_put_page(struct page_pool *pool, struct page *page, * refcnt == 1 means page_pool owns page, and can recycle it. * * page is NOT reusable when allocated when system is under - * some pressure. (page_is_pfmemalloc) + * some pressure. (page_pool_page_is_pfmemalloc) */ - if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) { + if (likely(page_pool_page_ref_count(page) == 1 && + !page_pool_page_is_pfmemalloc(page))) { /* Read barrier done in page_ref_count / READ_ONCE */ if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) @@ -772,7 +798,8 @@ static struct page *page_pool_drain_frag(struct page_pool *pool, if (likely(page_pool_defrag_page(page, drain_count))) return NULL; - if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) { + if (page_pool_page_ref_count(page) == 1 && + !page_pool_page_is_pfmemalloc(page)) { if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) page_pool_dma_sync_for_device(pool, page, -1); @@ -848,9 +875,9 @@ static void page_pool_empty_ring(struct page_pool *pool) /* Empty recycle ring */ while ((page = ptr_ring_consume_bh(&pool->ring))) { /* Verify the refcnt invariant of cached pages */ - if (!(page_ref_count(page) == 1)) + if (!(page_pool_page_ref_count(page) == 1)) pr_crit("%s() page_pool refcnt %d violation\n", - __func__, page_ref_count(page)); + __func__, page_pool_page_ref_count(page)); page_pool_return_page(pool, page); } From patchwork Mon Nov 6 02:44:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 13446237 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54A91C4167B for ; Mon, 6 Nov 2023 02:45:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230231AbjKFCpT (ORCPT ); Sun, 5 Nov 2023 21:45:19 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41380 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230346AbjKFCo4 (ORCPT ); Sun, 5 Nov 2023 21:44:56 -0500 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 37A2ED7A for ; Sun, 5 Nov 2023 18:44:37 -0800 (PST) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5b0e9c78309so53948447b3.1 for ; Sun, 05 Nov 2023 18:44:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1699238676; x=1699843476; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; 
Date: Sun, 5 Nov 2023 18:44:07 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
References: <20231106024413.2801438-1-almasrymina@google.com>
Message-ID: <20231106024413.2801438-9-almasrymina@google.com>
Subject: [RFC PATCH v3 08/12] net: support non paged skb frags
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Jesper Dangaard Brouer, Ilias Apalodimas, Arnd Bergmann, David Ahern, Willem de Bruijn, Shuah Khan, Sumit Semwal, Christian König, Shakeel Butt, Jeroen de Borst, Praveen Kaligineedi

Make skb_frag_page() fail in the case where the frag is not backed by a page, and fix its relevant callers to handle this case. Correctly handle skb_frag refcounting in the page_pool_iovs case.

Signed-off-by: Mina Almasry --- include/linux/skbuff.h | 42 +++++++++++++++++++++++++++++++++++------- net/core/gro.c | 2 +- net/core/skbuff.c | 3 +++ net/ipv4/tcp.c | 10 +++++++++- 4 files changed, 48 insertions(+), 9 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 97bfef071255..1fae276c1353 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -37,6 +37,8 @@ #endif #include #include +#include +#include /** * DOC: skb checksums @@ -3402,15 +3404,38 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto, fragto->bv_offset = fragfrom->bv_offset; } +/* Returns true if the skb_frag contains a page_pool_iov.
*/ +static inline bool skb_frag_is_page_pool_iov(const skb_frag_t *frag) +{ + return page_is_page_pool_iov(frag->bv_page); +} + /** * skb_frag_page - retrieve the page referred to by a paged fragment * @frag: the paged fragment * - * Returns the &struct page associated with @frag. + * Returns the &struct page associated with @frag. Returns NULL if this frag + * has no associated page. */ static inline struct page *skb_frag_page(const skb_frag_t *frag) { - return frag->bv_page; + if (!page_is_page_pool_iov(frag->bv_page)) + return frag->bv_page; + + return NULL; +} + +/** + * skb_frag_page_pool_iov - retrieve the page_pool_iov referred to by fragment + * @frag: the fragment + * + * Returns the &struct page_pool_iov associated with @frag. Returns NULL if this + * frag has no associated page_pool_iov. + */ +static inline struct page_pool_iov * +skb_frag_page_pool_iov(const skb_frag_t *frag) +{ + return page_to_page_pool_iov(frag->bv_page); } /** @@ -3421,7 +3446,7 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag) */ static inline void __skb_frag_ref(skb_frag_t *frag) { - get_page(skb_frag_page(frag)); + page_pool_page_get_many(frag->bv_page, 1); } /** @@ -3441,13 +3466,13 @@ bool napi_pp_put_page(struct page *page, bool napi_safe); static inline void napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe) { - struct page *page = skb_frag_page(frag); - #ifdef CONFIG_PAGE_POOL - if (recycle && napi_pp_put_page(page, napi_safe)) + if (recycle && napi_pp_put_page(frag->bv_page, napi_safe)) return; + page_pool_page_put_many(frag->bv_page, 1); +#else + put_page(skb_frag_page(frag)); #endif - put_page(page); } /** @@ -3487,6 +3512,9 @@ static inline void skb_frag_unref(struct sk_buff *skb, int f) */ static inline void *skb_frag_address(const skb_frag_t *frag) { + if (!skb_frag_page(frag)) + return NULL; + return page_address(skb_frag_page(frag)) + skb_frag_off(frag); } diff --git a/net/core/gro.c b/net/core/gro.c index 0759277dc14e..42d7f6755f32 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -376,7 +376,7 @@ static inline void skb_gro_reset_offset(struct sk_buff *skb, u32 nhoff) NAPI_GRO_CB(skb)->frag0 = NULL; NAPI_GRO_CB(skb)->frag0_len = 0; - if (!skb_headlen(skb) && pinfo->nr_frags && + if (!skb_headlen(skb) && pinfo->nr_frags && skb_frag_page(frag0) && !PageHighMem(skb_frag_page(frag0)) && (!NET_IP_ALIGN || !((skb_frag_off(frag0) + nhoff) & 3))) { NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index c52ddd6891d9..13eca4fd25e1 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -2994,6 +2994,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe, for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) { const skb_frag_t *f = &skb_shinfo(skb)->frags[seg]; + if (WARN_ON_ONCE(!skb_frag_page(f))) + return false; + if (__splice_segment(skb_frag_page(f), skb_frag_off(f), skb_frag_size(f), offset, len, spd, false, sk, pipe)) diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index a86d8200a1e8..23b29dc49271 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -2155,6 +2155,9 @@ static int tcp_zerocopy_receive(struct sock *sk, break; } page = skb_frag_page(frags); + if (WARN_ON_ONCE(!page)) + break; + prefetchw(page); pages[pages_to_map++] = page; length += PAGE_SIZE; @@ -4411,7 +4414,12 @@ int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp, for (i = 0; i < shi->nr_frags; ++i) { const skb_frag_t *f = &shi->frags[i]; unsigned int offset = skb_frag_off(f); - struct page *page = 
skb_frag_page(f) + (offset >> PAGE_SHIFT); + struct page *page = skb_frag_page(f); + + if (WARN_ON_ONCE(!page)) + return 1; + + page += offset >> PAGE_SHIFT; sg_set_page(&sg, page, skb_frag_size(f), offset_in_page(offset));

From patchwork Mon Nov 6 02:44:08 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13446238
Date: Sun, 5 Nov 2023 18:44:08 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
References: <20231106024413.2801438-1-almasrymina@google.com>
Message-ID: <20231106024413.2801438-10-almasrymina@google.com>
Subject: [RFC PATCH v3 09/12] net: add support for skbs with
unreadable frags From: Mina Almasry To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Cc: Mina Almasry , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Shakeel Butt , Jeroen de Borst , Praveen Kaligineedi , Willem de Bruijn , Kaiyuan Zhang Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org For device memory TCP, we expect the skb headers to be available in host memory for access, and we expect the skb frags to be in device memory and unaccessible to the host. We expect there to be no mixing and matching of device memory frags (unaccessible) with host memory frags (accessible) in the same skb. Add a skb->devmem flag which indicates whether the frags in this skb are device memory frags or not. __skb_fill_page_desc() now checks frags added to skbs for page_pool_iovs, and marks the skb as skb->devmem accordingly. Add checks through the network stack to avoid accessing the frags of devmem skbs and avoid coalescing devmem skbs with non devmem skbs. Signed-off-by: Willem de Bruijn Signed-off-by: Kaiyuan Zhang Signed-off-by: Mina Almasry --- include/linux/skbuff.h | 14 +++++++- include/net/tcp.h | 5 +-- net/core/datagram.c | 6 ++++ net/core/gro.c | 5 ++- net/core/skbuff.c | 77 ++++++++++++++++++++++++++++++++++++------ net/ipv4/tcp.c | 6 ++++ net/ipv4/tcp_input.c | 13 +++++-- net/ipv4/tcp_output.c | 5 ++- net/packet/af_packet.c | 4 +-- 9 files changed, 115 insertions(+), 20 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 1fae276c1353..8fb468ff8115 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -805,6 +805,8 @@ typedef unsigned char *sk_buff_data_t; * @csum_level: indicates the number of consecutive checksums found in * the packet minus one that have been verified as * CHECKSUM_UNNECESSARY (max 3) + * @devmem: indicates that all the fragments in this skb are backed by + * device memory. * @dst_pending_confirm: need to confirm neighbour * @decrypted: Decrypted SKB * @slow_gro: state present at GRO time, slower prepare step required @@ -991,7 +993,7 @@ struct sk_buff { #if IS_ENABLED(CONFIG_IP_SCTP) __u8 csum_not_inet:1; #endif - + __u8 devmem:1; #if defined(CONFIG_NET_SCHED) || defined(CONFIG_NET_XGRESS) __u16 tc_index; /* traffic control index */ #endif @@ -1766,6 +1768,12 @@ static inline void skb_zcopy_downgrade_managed(struct sk_buff *skb) __skb_zcopy_downgrade_managed(skb); } +/* Return true if frags in this skb are not readable by the host. */ +static inline bool skb_frags_not_readable(const struct sk_buff *skb) +{ + return skb->devmem; +} + static inline void skb_mark_not_on_list(struct sk_buff *skb) { skb->next = NULL; @@ -2468,6 +2476,10 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i, struct page *page, int off, int size) { __skb_fill_page_desc_noacc(skb_shinfo(skb), i, page, off, size); + if (page_is_page_pool_iov(page)) { + skb->devmem = true; + return; + } /* Propagate page pfmemalloc to the skb if we can. 
The problem is * that not all callers have unique ownership of the page but rely diff --git a/include/net/tcp.h b/include/net/tcp.h index 39b731c900dd..1ae62d1e284b 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1012,7 +1012,7 @@ static inline int tcp_skb_mss(const struct sk_buff *skb) static inline bool tcp_skb_can_collapse_to(const struct sk_buff *skb) { - return likely(!TCP_SKB_CB(skb)->eor); + return likely(!TCP_SKB_CB(skb)->eor && !skb_frags_not_readable(skb)); } static inline bool tcp_skb_can_collapse(const struct sk_buff *to, @@ -1020,7 +1020,8 @@ static inline bool tcp_skb_can_collapse(const struct sk_buff *to, { return likely(tcp_skb_can_collapse_to(to) && mptcp_skb_can_collapse(to, from) && - skb_pure_zcopy_same(to, from)); + skb_pure_zcopy_same(to, from) && + skb_frags_not_readable(to) == skb_frags_not_readable(from)); } /* Events passed to congestion control interface */ diff --git a/net/core/datagram.c b/net/core/datagram.c index 176eb5834746..cdd4fb129968 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -425,6 +425,9 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset, return 0; } + if (skb_frags_not_readable(skb)) + goto short_copy; + /* Copy paged appendix. Hmm... why does this look so complicated? */ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; @@ -616,6 +619,9 @@ int __zerocopy_sg_from_iter(struct msghdr *msg, struct sock *sk, { int frag; + if (skb_frags_not_readable(skb)) + return -EFAULT; + if (msg && msg->msg_ubuf && msg->sg_from_iter) return msg->sg_from_iter(sk, skb, from, length); diff --git a/net/core/gro.c b/net/core/gro.c index 42d7f6755f32..56046d65386a 100644 --- a/net/core/gro.c +++ b/net/core/gro.c @@ -390,6 +390,9 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow) { struct skb_shared_info *pinfo = skb_shinfo(skb); + if (WARN_ON_ONCE(skb_frags_not_readable(skb))) + return; + BUG_ON(skb->end - skb->tail < grow); memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow); @@ -411,7 +414,7 @@ static void gro_try_pull_from_frag0(struct sk_buff *skb) { int grow = skb_gro_offset(skb) - skb_headlen(skb); - if (grow > 0) + if (grow > 0 && !skb_frags_not_readable(skb)) gro_pull_from_frag0(skb, grow); } diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 13eca4fd25e1..f01673ed2eff 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -1230,6 +1230,14 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt) struct page *p; u8 *vaddr; + if (skb_frag_is_page_pool_iov(frag)) { + printk("%sskb frag %d: not readable\n", level, i); + len -= frag->bv_len; + if (!len) + break; + continue; + } + skb_frag_foreach_page(frag, skb_frag_off(frag), skb_frag_size(frag), p, p_off, p_len, copied) { @@ -1807,6 +1815,9 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask) if (skb_shared(skb) || skb_unclone(skb, gfp_mask)) return -EINVAL; + if (skb_frags_not_readable(skb)) + return -EFAULT; + if (!num_frags) goto release; @@ -1977,8 +1988,12 @@ struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask) { int headerlen = skb_headroom(skb); unsigned int size = skb_end_offset(skb) + skb->data_len; - struct sk_buff *n = __alloc_skb(size, gfp_mask, - skb_alloc_rx_flag(skb), NUMA_NO_NODE); + struct sk_buff *n; + + if (skb_frags_not_readable(skb)) + return NULL; + + n = __alloc_skb(size, gfp_mask, skb_alloc_rx_flag(skb), NUMA_NO_NODE); if (!n) return NULL; @@ -2304,14 +2319,16 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb, int newheadroom, int newtailroom, gfp_t 
gfp_mask) { - /* - * Allocate the copy buffer - */ - struct sk_buff *n = __alloc_skb(newheadroom + skb->len + newtailroom, - gfp_mask, skb_alloc_rx_flag(skb), - NUMA_NO_NODE); int oldheadroom = skb_headroom(skb); int head_copy_len, head_copy_off; + struct sk_buff *n; + + if (skb_frags_not_readable(skb)) + return NULL; + + /* Allocate the copy buffer */ + n = __alloc_skb(newheadroom + skb->len + newtailroom, gfp_mask, + skb_alloc_rx_flag(skb), NUMA_NO_NODE); if (!n) return NULL; @@ -2650,6 +2667,9 @@ void *__pskb_pull_tail(struct sk_buff *skb, int delta) */ int i, k, eat = (skb->tail + delta) - skb->end; + if (skb_frags_not_readable(skb)) + return NULL; + if (eat > 0 || skb_cloned(skb)) { if (pskb_expand_head(skb, 0, eat > 0 ? eat + 128 : 0, GFP_ATOMIC)) @@ -2803,6 +2823,9 @@ int skb_copy_bits(const struct sk_buff *skb, int offset, void *to, int len) to += copy; } + if (skb_frags_not_readable(skb)) + goto fault; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; skb_frag_t *f = &skb_shinfo(skb)->frags[i]; @@ -2991,6 +3014,9 @@ static bool __skb_splice_bits(struct sk_buff *skb, struct pipe_inode_info *pipe, /* * then map the fragments */ + if (skb_frags_not_readable(skb)) + return false; + for (seg = 0; seg < skb_shinfo(skb)->nr_frags; seg++) { const skb_frag_t *f = &skb_shinfo(skb)->frags[seg]; @@ -3214,6 +3240,9 @@ int skb_store_bits(struct sk_buff *skb, int offset, const void *from, int len) from += copy; } + if (skb_frags_not_readable(skb)) + goto fault; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; int end; @@ -3293,6 +3322,9 @@ __wsum __skb_checksum(const struct sk_buff *skb, int offset, int len, pos = copy; } + if (skb_frags_not_readable(skb)) + return 0; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; @@ -3393,6 +3425,9 @@ __wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset, pos = copy; } + if (skb_frags_not_readable(skb)) + return 0; + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { int end; @@ -3883,7 +3918,9 @@ static inline void skb_split_inside_header(struct sk_buff *skb, skb_shinfo(skb1)->frags[i] = skb_shinfo(skb)->frags[i]; skb_shinfo(skb1)->nr_frags = skb_shinfo(skb)->nr_frags; + skb1->devmem = skb->devmem; skb_shinfo(skb)->nr_frags = 0; + skb->devmem = 0; skb1->data_len = skb->data_len; skb1->len += skb1->data_len; skb->data_len = 0; @@ -3897,6 +3934,7 @@ static inline void skb_split_no_header(struct sk_buff *skb, { int i, k = 0; const int nfrags = skb_shinfo(skb)->nr_frags; + const int devmem = skb->devmem; skb_shinfo(skb)->nr_frags = 0; skb1->len = skb1->data_len = skb->len - len; @@ -3930,6 +3968,16 @@ static inline void skb_split_no_header(struct sk_buff *skb, pos += size; } skb_shinfo(skb1)->nr_frags = k; + + if (skb_shinfo(skb)->nr_frags) + skb->devmem = devmem; + else + skb->devmem = 0; + + if (skb_shinfo(skb1)->nr_frags) + skb1->devmem = devmem; + else + skb1->devmem = 0; } /** @@ -4165,6 +4213,9 @@ unsigned int skb_seq_read(unsigned int consumed, const u8 **data, return block_limit - abs_offset; } + if (skb_frags_not_readable(st->cur_skb)) + return 0; + if (st->frag_idx == 0 && !st->frag_data) st->stepped_offset += skb_headlen(st->cur_skb); @@ -5779,7 +5830,10 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from, (from->pp_recycle && skb_cloned(from))) return false; - if (len <= skb_tailroom(to)) { + if (skb_frags_not_readable(from) != skb_frags_not_readable(to)) + return false; + + if (len <= skb_tailroom(to) && 
!skb_frags_not_readable(from)) { if (len) BUG_ON(skb_copy_bits(from, 0, skb_put(to, len), len)); *delta_truesize = 0; @@ -5954,6 +6008,9 @@ int skb_ensure_writable(struct sk_buff *skb, unsigned int write_len) if (!pskb_may_pull(skb, write_len)) return -ENOMEM; + if (skb_frags_not_readable(skb)) + return -EFAULT; + if (!skb_cloned(skb) || skb_clone_writable(skb, write_len)) return 0; @@ -6608,7 +6665,7 @@ void skb_condense(struct sk_buff *skb) { if (skb->data_len) { if (skb->data_len > skb->end - skb->tail || - skb_cloned(skb)) + skb_cloned(skb) || skb_frags_not_readable(skb)) return; /* Nice, we can free page frag(s) right now */ diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 23b29dc49271..5c6fed52ed0e 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -2138,6 +2138,9 @@ static int tcp_zerocopy_receive(struct sock *sk, skb = tcp_recv_skb(sk, seq, &offset); } + if (skb_frags_not_readable(skb)) + break; + if (TCP_SKB_CB(skb)->has_rxtstamp) { tcp_update_recv_tstamps(skb, tss); zc->msg_flags |= TCP_CMSG_TS; @@ -4411,6 +4414,9 @@ int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp, if (crypto_ahash_update(req)) return 1; + if (skb_frags_not_readable(skb)) + return 1; + for (i = 0; i < shi->nr_frags; ++i) { const skb_frag_t *f = &shi->frags[i]; unsigned int offset = skb_frag_off(f); diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 18b858597af4..64643dad5e1a 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5264,6 +5264,9 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, for (end_of_skbs = true; skb != NULL && skb != tail; skb = n) { n = tcp_skb_next(skb, list); + if (skb_frags_not_readable(skb)) + goto skip_this; + /* No new bits? It is possible on ofo queue. */ if (!before(start, TCP_SKB_CB(skb)->end_seq)) { skb = tcp_collapse_one(sk, skb, list, root); @@ -5284,17 +5287,20 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, break; } - if (n && n != tail && mptcp_skb_can_collapse(skb, n) && + if (n && n != tail && !skb_frags_not_readable(n) && + mptcp_skb_can_collapse(skb, n) && TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(n)->seq) { end_of_skbs = false; break; } +skip_this: /* Decided to skip this, advance start seq. 
*/ start = TCP_SKB_CB(skb)->end_seq; } if (end_of_skbs || - (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN))) + (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) || + skb_frags_not_readable(skb)) return; __skb_queue_head_init(&tmp); @@ -5338,7 +5344,8 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root, if (!skb || skb == tail || !mptcp_skb_can_collapse(nskb, skb) || - (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN))) + (TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) || + skb_frags_not_readable(skb)) goto end; #ifdef CONFIG_TLS_DEVICE if (skb->decrypted != nskb->decrypted) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 2866ccbccde0..60df27f6c649 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2309,7 +2309,8 @@ static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len) if (unlikely(TCP_SKB_CB(skb)->eor) || tcp_has_tx_tstamp(skb) || - !skb_pure_zcopy_same(skb, next)) + !skb_pure_zcopy_same(skb, next) || + skb_frags_not_readable(skb) != skb_frags_not_readable(next)) return false; len -= skb->len; @@ -3193,6 +3194,8 @@ static bool tcp_can_collapse(const struct sock *sk, const struct sk_buff *skb) return false; if (skb_cloned(skb)) return false; + if (skb_frags_not_readable(skb)) + return false; /* Some heuristics for collapsing over SACK'd could be invented */ if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) return false; diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c index a84e00b5904b..8f6cca683939 100644 --- a/net/packet/af_packet.c +++ b/net/packet/af_packet.c @@ -2156,7 +2156,7 @@ static int packet_rcv(struct sk_buff *skb, struct net_device *dev, } } - snaplen = skb->len; + snaplen = skb_frags_not_readable(skb) ? skb_headlen(skb) : skb->len; res = run_filter(skb, sk, snaplen); if (!res) @@ -2279,7 +2279,7 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev, } } - snaplen = skb->len; + snaplen = skb_frags_not_readable(skb) ? 
skb_headlen(skb) : skb->len; res = run_filter(skb, sk, snaplen); if (!res)

From patchwork Mon Nov 6 02:44:09 2023
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13446239
Date: Sun, 5 Nov 2023 18:44:09 -0800
In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com>
References: <20231106024413.2801438-1-almasrymina@google.com>
Message-ID: <20231106024413.2801438-11-almasrymina@google.com>
Subject: [RFC PATCH v3 10/12] tcp: RX path for devmem TCP
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org,
linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Cc: Mina Almasry , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Shakeel Butt , Jeroen de Borst , Praveen Kaligineedi , Willem de Bruijn , Kaiyuan Zhang Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org In tcp_recvmsg_locked(), detect if the skb being received by the user is a devmem skb. In this case - if the user provided the MSG_SOCK_DEVMEM flag - pass it to tcp_recvmsg_devmem() for custom handling. tcp_recvmsg_devmem() copies any data in the skb header to the linear buffer, and returns a cmsg to the user indicating the number of bytes returned in the linear buffer. tcp_recvmsg_devmem() then loops over the unaccessible devmem skb frags, and returns to the user a cmsg_devmem indicating the location of the data in the dmabuf device memory. cmsg_devmem contains this information: 1. the offset into the dmabuf where the payload starts. 'frag_offset'. 2. the size of the frag. 'frag_size'. 3. an opaque token 'frag_token' to return to the kernel when the buffer is to be released. The pages awaiting freeing are stored in the newly added sk->sk_user_pages, and each page passed to userspace is get_page()'d. This reference is dropped once the userspace indicates that it is done reading this page. All pages are released when the socket is destroyed. Signed-off-by: Willem de Bruijn Signed-off-by: Kaiyuan Zhang Signed-off-by: Mina Almasry --- RFC v3: - Fixed issue with put_cmsg() failing silently. --- include/linux/socket.h | 1 + include/net/page_pool/helpers.h | 9 ++ include/net/sock.h | 2 + include/uapi/asm-generic/socket.h | 5 + include/uapi/linux/uio.h | 6 + net/ipv4/tcp.c | 189 +++++++++++++++++++++++++++++- net/ipv4/tcp_ipv4.c | 7 ++ 7 files changed, 214 insertions(+), 5 deletions(-) diff --git a/include/linux/socket.h b/include/linux/socket.h index cfcb7e2c3813..fe2b9e2081bb 100644 --- a/include/linux/socket.h +++ b/include/linux/socket.h @@ -326,6 +326,7 @@ struct ucred { * plain text and require encryption */ +#define MSG_SOCK_DEVMEM 0x2000000 /* Receive devmem skbs as cmsg */ #define MSG_ZEROCOPY 0x4000000 /* Use user data in kernel path */ #define MSG_SPLICE_PAGES 0x8000000 /* Splice the pages from the iterator in sendmsg() */ #define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */ diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 08f1a2cc70d2..95f4d579cbc4 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -106,6 +106,15 @@ page_pool_iov_dma_addr(const struct page_pool_iov *ppiov) ((dma_addr_t)page_pool_iov_idx(ppiov) << PAGE_SHIFT); } +static inline unsigned long +page_pool_iov_virtual_addr(const struct page_pool_iov *ppiov) +{ + struct dmabuf_genpool_chunk_owner *owner = page_pool_iov_owner(ppiov); + + return owner->base_virtual + + ((unsigned long)page_pool_iov_idx(ppiov) << PAGE_SHIFT); +} + static inline struct netdev_dmabuf_binding * page_pool_iov_binding(const struct page_pool_iov *ppiov) { diff --git a/include/net/sock.h b/include/net/sock.h index 242590308d64..986d9da6e062 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -353,6 +353,7 @@ struct sk_filter; * @sk_txtime_unused: unused txtime flags * @ns_tracker: tracker for netns reference * @sk_bind2_node: bind node in the bhash2 table + * @sk_user_pages: xarray 
of pages the user is holding a reference on. */ struct sock { /* @@ -545,6 +546,7 @@ struct sock { struct rcu_head sk_rcu; netns_tracker ns_tracker; struct hlist_node sk_bind2_node; + struct xarray sk_user_pages; }; enum sk_pacing { diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h index 8ce8a39a1e5f..aacb97f16b78 100644 --- a/include/uapi/asm-generic/socket.h +++ b/include/uapi/asm-generic/socket.h @@ -135,6 +135,11 @@ #define SO_PASSPIDFD 76 #define SO_PEERPIDFD 77 +#define SO_DEVMEM_HEADER 98 +#define SCM_DEVMEM_HEADER SO_DEVMEM_HEADER +#define SO_DEVMEM_OFFSET 99 +#define SCM_DEVMEM_OFFSET SO_DEVMEM_OFFSET + #if !defined(__KERNEL__) #if __BITS_PER_LONG == 64 || (defined(__x86_64__) && defined(__ILP32__)) diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h index 059b1a9147f4..ae94763b1963 100644 --- a/include/uapi/linux/uio.h +++ b/include/uapi/linux/uio.h @@ -20,6 +20,12 @@ struct iovec __kernel_size_t iov_len; /* Must be size_t (1003.1g) */ }; +struct cmsg_devmem { + __u64 frag_offset; + __u32 frag_size; + __u32 frag_token; +}; + /* * UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1) */ diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 5c6fed52ed0e..fd7f6d7e7671 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -461,6 +461,7 @@ void tcp_init_sock(struct sock *sk) set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags); sk_sockets_allocated_inc(sk); + xa_init_flags(&sk->sk_user_pages, XA_FLAGS_ALLOC1); } EXPORT_SYMBOL(tcp_init_sock); @@ -2301,6 +2302,154 @@ static int tcp_inq_hint(struct sock *sk) return inq; } +/* On error, returns the -errno. On success, returns number of bytes sent to the + * user. May not consume all of @remaining_len. + */ +static int tcp_recvmsg_devmem(const struct sock *sk, const struct sk_buff *skb, + unsigned int offset, struct msghdr *msg, + int remaining_len) +{ + struct cmsg_devmem cmsg_devmem = { 0 }; + unsigned int start; + int i, copy, n; + int sent = 0; + int err = 0; + + do { + start = skb_headlen(skb); + + if (!skb_frags_not_readable(skb)) { + err = -ENODEV; + goto out; + } + + /* Copy header. */ + copy = start - offset; + if (copy > 0) { + copy = min(copy, remaining_len); + + n = copy_to_iter(skb->data + offset, copy, + &msg->msg_iter); + if (n != copy) { + err = -EFAULT; + goto out; + } + + offset += copy; + remaining_len -= copy; + + /* First a cmsg_devmem for # bytes copied to user + * buffer. + */ + memset(&cmsg_devmem, 0, sizeof(cmsg_devmem)); + cmsg_devmem.frag_size = copy; + err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_HEADER, + sizeof(cmsg_devmem), &cmsg_devmem); + if (err || msg->msg_flags & MSG_CTRUNC) { + msg->msg_flags &= ~MSG_CTRUNC; + if (!err) + err = -ETOOSMALL; + goto out; + } + + sent += copy; + + if (remaining_len == 0) + goto out; + } + + /* after that, send information of devmem pages through a + * sequence of cmsg + */ + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + const skb_frag_t *frag = &skb_shinfo(skb)->frags[i]; + struct page_pool_iov *ppiov; + u64 frag_offset; + u32 user_token; + int end; + + /* skb_frags_not_readable() should indicate that ALL the + * frags in this skb are unreadable page_pool_iovs. + * We're checking for that flag above, but also check + * individual pages here. If the tcp stack is not + * setting skb->devmem correctly, we still don't want to + * crash here when accessing pgmap or priv below. 
+ */ + if (!skb_frag_page_pool_iov(frag)) { + net_err_ratelimited("Found non-devmem skb with page_pool_iov"); + err = -ENODEV; + goto out; + } + + ppiov = skb_frag_page_pool_iov(frag); + end = start + skb_frag_size(frag); + copy = end - offset; + + if (copy > 0) { + copy = min(copy, remaining_len); + + frag_offset = page_pool_iov_virtual_addr(ppiov) + + skb_frag_off(frag) + offset - + start; + cmsg_devmem.frag_offset = frag_offset; + cmsg_devmem.frag_size = copy; + err = xa_alloc((struct xarray *)&sk->sk_user_pages, + &user_token, frag->bv_page, + xa_limit_31b, GFP_KERNEL); + if (err) + goto out; + + cmsg_devmem.frag_token = user_token; + + offset += copy; + remaining_len -= copy; + + err = put_cmsg(msg, SOL_SOCKET, + SO_DEVMEM_OFFSET, + sizeof(cmsg_devmem), + &cmsg_devmem); + if (err || msg->msg_flags & MSG_CTRUNC) { + msg->msg_flags &= ~MSG_CTRUNC; + xa_erase((struct xarray *)&sk->sk_user_pages, + user_token); + if (!err) + err = -ETOOSMALL; + goto out; + } + + page_pool_iov_get_many(ppiov, 1); + + sent += copy; + + if (remaining_len == 0) + goto out; + } + start = end; + } + + if (!remaining_len) + goto out; + + /* if remaining_len is not satisfied yet, we need to go to the + * next frag in the frag_list to satisfy remaining_len. + */ + skb = skb_shinfo(skb)->frag_list ?: skb->next; + + offset = offset - start; + } while (skb); + + if (remaining_len) { + err = -EFAULT; + goto out; + } + +out: + if (!sent) + sent = err; + + return sent; +} + /* * This routine copies from a sock struct into the user buffer. * @@ -2314,6 +2463,7 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len, int *cmsg_flags) { struct tcp_sock *tp = tcp_sk(sk); + int last_copied_devmem = -1; /* uninitialized */ int copied = 0; u32 peek_seq; u32 *seq; @@ -2491,15 +2641,44 @@ static int tcp_recvmsg_locked(struct sock *sk, struct msghdr *msg, size_t len, } if (!(flags & MSG_TRUNC)) { - err = skb_copy_datagram_msg(skb, offset, msg, used); - if (err) { - /* Exception. Bailout! */ - if (!copied) - copied = -EFAULT; + if (last_copied_devmem != -1 && + last_copied_devmem != skb->devmem) break; + + if (!skb->devmem) { + err = skb_copy_datagram_msg(skb, offset, msg, + used); + if (err) { + /* Exception. Bailout! */ + if (!copied) + copied = -EFAULT; + break; + } + } else { + if (!(flags & MSG_SOCK_DEVMEM)) { + /* skb->devmem skbs can only be received + * with the MSG_SOCK_DEVMEM flag. 
+ */ + if (!copied) + copied = -EFAULT; + + break; + } + + err = tcp_recvmsg_devmem(sk, skb, offset, msg, + used); + if (err <= 0) { + if (!copied) + copied = -EFAULT; + + break; + } + used = err; } } + last_copied_devmem = skb->devmem; + WRITE_ONCE(*seq, *seq + used); copied += used; len -= used; diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 7583d4e34c8c..4cc8be892f05 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -2299,6 +2299,13 @@ static int tcp_v4_init_sock(struct sock *sk) void tcp_v4_destroy_sock(struct sock *sk) { struct tcp_sock *tp = tcp_sk(sk); + struct page *page; + unsigned long index; + + xa_for_each(&sk->sk_user_pages, index, page) + page_pool_page_put_many(page, 1); + + xa_destroy(&sk->sk_user_pages); trace_tcp_destroy_sock(sk); From patchwork Mon Nov 6 02:44:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 13446240 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 526CCC4167D for ; Mon, 6 Nov 2023 02:45:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231159AbjKFCpu (ORCPT ); Sun, 5 Nov 2023 21:45:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54912 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230267AbjKFCpU (ORCPT ); Sun, 5 Nov 2023 21:45:20 -0500 Received: from mail-yw1-x114a.google.com (mail-yw1-x114a.google.com [IPv6:2607:f8b0:4864:20::114a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B8C85170C for ; Sun, 5 Nov 2023 18:44:43 -0800 (PST) Received: by mail-yw1-x114a.google.com with SMTP id 00721157ae682-5a839b31a0dso81591277b3.0 for ; Sun, 05 Nov 2023 18:44:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1699238682; x=1699843482; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=ZeiiEJbRtsTNAyw99p8ZCn3Vjnqo19hTPl7OEJ2oi/s=; b=jwEd6cs1Lg6YMrqKN8qfTAgdifr0auIiim0QBzwHITxKPJEqfJ5KZ57PSEJePl4+Nj shqMCrYBWgK1YX1VglxN66HvQBiQYSK3/VUSNq5wqEavBmIvl586WZn0Vv/FJsHh9n4r dHFr8UHhTSrChwcxYIy6iXJkXMZwoiCiX2Gw0/TBliwVWNNIPRBp7KfJVENRhdsimbyI LoIf47hQfFhuuwUolfRrIoQDWbiYnDf7+0fhTmJqtNWfMifUVHUR6vWlD8TF1bQVCGJb urkN/kt632R3lJqWZwWXq7yYzl8c4wVx9cPt+qicqaeLs/GJyLfR8uAW2tx9EFCEqTn+ qxGg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699238682; x=1699843482; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=ZeiiEJbRtsTNAyw99p8ZCn3Vjnqo19hTPl7OEJ2oi/s=; b=UG5bZ/6TKNks3ZDBAnHhWWP8FntQ7SSQnv0YKUZgclCq2Mpa+PnQGTJp4mWKOthDVa AdpyUYqoZiIschrJNXYilAQQbojghvQalKFKqPqEr2Y7qhrI4MWGnvMiR6YQxtKgZURD vbp8GtmcS60c1z2B+FREh+O9agp3DWe4kEkvLevIswsuv7hQV83FLl0D6/ReHAD+KkAa VTimbjVIt7L8SV6fTURa9Hjks8mwpxkoDGGxy9T+8vcipxD85FoFfX8dtCIiXx7SO5dS nMtDsHANIOr6miCt3cVwCS1OOW7wIz5EP8vlIF7xJDFMAaXWQddY0CDzgpFqoFi6C+bv Mr0g== X-Gm-Message-State: AOJu0Yy210bSMT63OorSPjXzzKKjJCifUmwz8oCxNFKu/Ju25i9z4Puh 0jFj5oSwVTfdrXA9CsIlFWph3HdxNYAoja0qiA== X-Google-Smtp-Source: AGHT+IGXB1Wk5WKMXZr8HTN6imbMYrfy8ASdQjy/ozkx0p4xonXE7b4iNBr/Mhu3s45eIhJrmpU89xDxoCb7QfNOog== X-Received: from almasrymina.svl.corp.google.com 
([2620:15c:2c4:200:35de:fff:97b7:db3e]) (user=almasrymina job=sendgmr) by 2002:a25:828d:0:b0:d9a:4421:6ec5 with SMTP id r13-20020a25828d000000b00d9a44216ec5mr537997ybk.3.1699238682207; Sun, 05 Nov 2023 18:44:42 -0800 (PST) Date: Sun, 5 Nov 2023 18:44:10 -0800 In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com> Mime-Version: 1.0 References: <20231106024413.2801438-1-almasrymina@google.com> X-Mailer: git-send-email 2.42.0.869.gea05f2083d-goog Message-ID: <20231106024413.2801438-12-almasrymina@google.com> Subject: [RFC PATCH v3 11/12] net: add SO_DEVMEM_DONTNEED setsockopt to release RX pages From: Mina Almasry To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Cc: Mina Almasry , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Shakeel Butt , Jeroen de Borst , Praveen Kaligineedi , Willem de Bruijn , Kaiyuan Zhang Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org Add an interface for the user to notify the kernel that it is done reading the NET_RX dmabuf pages returned as cmsg. The kernel will drop the reference on the NET_RX pages to make them available for re-use. Signed-off-by: Willem de Bruijn Signed-off-by: Kaiyuan Zhang Signed-off-by: Mina Almasry --- include/uapi/asm-generic/socket.h | 1 + include/uapi/linux/uio.h | 4 ++++ net/core/sock.c | 36 +++++++++++++++++++++++++++++++ 3 files changed, 41 insertions(+) diff --git a/include/uapi/asm-generic/socket.h b/include/uapi/asm-generic/socket.h index aacb97f16b78..eb93b43394d4 100644 --- a/include/uapi/asm-generic/socket.h +++ b/include/uapi/asm-generic/socket.h @@ -135,6 +135,7 @@ #define SO_PASSPIDFD 76 #define SO_PEERPIDFD 77 +#define SO_DEVMEM_DONTNEED 97 #define SO_DEVMEM_HEADER 98 #define SCM_DEVMEM_HEADER SO_DEVMEM_HEADER #define SO_DEVMEM_OFFSET 99 diff --git a/include/uapi/linux/uio.h b/include/uapi/linux/uio.h index ae94763b1963..71314bf41590 100644 --- a/include/uapi/linux/uio.h +++ b/include/uapi/linux/uio.h @@ -26,6 +26,10 @@ struct cmsg_devmem { __u32 frag_token; }; +struct devmemtoken { + __u32 token_start; + __u32 token_count; +}; /* * UIO_MAXIOV shall be at least 16 1003.1g (5.4.1.1) */ diff --git a/net/core/sock.c b/net/core/sock.c index 1d28e3e87970..4ddc6b11d915 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1051,6 +1051,39 @@ static int sock_reserve_memory(struct sock *sk, int bytes) return 0; } +static noinline_for_stack int +sock_devmem_dontneed(struct sock *sk, sockptr_t optval, unsigned int optlen) +{ + struct devmemtoken tokens[128]; + unsigned int num_tokens, i, j; + int ret; + + if (sk->sk_type != SOCK_STREAM || sk->sk_protocol != IPPROTO_TCP) + return -EBADF; + + if (optlen % sizeof(struct devmemtoken) || optlen > sizeof(tokens)) + return -EINVAL; + + num_tokens = optlen / sizeof(struct devmemtoken); + if (copy_from_sockptr(tokens, optval, optlen)) + return -EFAULT; + + ret = 0; + for (i = 0; i < num_tokens; i++) { + for (j = 0; j < tokens[i].token_count; j++) { + struct page *page = xa_erase(&sk->sk_user_pages, + tokens[i].token_start + j); + + if (page) { + page_pool_page_put_many(page, 1); + ret++; + } + } + } + + return ret; +} + void sockopt_lock_sock(struct sock *sk) { /* When current->bpf_ctx is set, the setsockopt is called from @@ 
-1538,6 +1571,9 @@ int sk_setsockopt(struct sock *sk, int level, int optname, break; } + case SO_DEVMEM_DONTNEED: + ret = sock_devmem_dontneed(sk, optval, optlen); + break; default: ret = -ENOPROTOOPT; break; From patchwork Mon Nov 6 02:44:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mina Almasry X-Patchwork-Id: 13446241 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 974E8C4167D for ; Mon, 6 Nov 2023 02:46:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230484AbjKFCqB (ORCPT ); Sun, 5 Nov 2023 21:46:01 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54774 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230387AbjKFCp2 (ORCPT ); Sun, 5 Nov 2023 21:45:28 -0500 Received: from mail-yw1-x1149.google.com (mail-yw1-x1149.google.com [IPv6:2607:f8b0:4864:20::1149]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 32B041729 for ; Sun, 5 Nov 2023 18:44:45 -0800 (PST) Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5b064442464so55032757b3.3 for ; Sun, 05 Nov 2023 18:44:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1699238684; x=1699843484; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=TUP3yrFP0An+tKPrM9CHpFI1TCjxfyeIP5pvhkXR8IQ=; b=vZ2RXNpAUxRPOBvsM4V2pVr+5ay77IwkisEeR2cP5fG1zileTHT40hxZKFqyzFsC8p hWkABJgAGb7skoufKgi6BUHCobjkZDkAsYWCimjfEL2b2kD3YGQvqsR0e/y7SuHdyBXn r+eZYSwP6+oFR7OpLEfQ/hZSYd4eqTnLh4xBAICZ7yZHdDVf7YaESITC0+59Euy1VVn8 VaN9IPphJ3mbc0Kfzxk3KfAMuubDjanS/KIlNmF0cf/em5g2rUcKuWZisRW6S/C2i3t/ SyKYE1++tDNDfuqWWFVcEuXlrye+nzfrHhONGHehlj0QZ8KdhjMqHu6/4FDPKKyxL2TR 4png== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1699238684; x=1699843484; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=TUP3yrFP0An+tKPrM9CHpFI1TCjxfyeIP5pvhkXR8IQ=; b=vCTLoyngY0sgD/6YrYI6ZzBj/jcoh8LOMs2BLAYgATExG++JeSPiFsKwle3LdojdDV IJI03Tt0I/pU0W4e2qUofni79VkousddXtdUS31rt+En9qo94o/Tr0cdhUuPdSgDjW4u l5I1y6h9zfEcIAmNUCWHz84KhjYKzCifInaxsvdz/Q/WWysW2tP43l8MVvI6sN+XoMYu H5D5wgu2M6vlYsD74L6r87r64JJROCpZ3B38GZjxm6ogUK1BDPuQ2Bk+O1jq51u4F5BH 1ubAfHC7f23Vzzn37/8HtFB8Ur+zUKhc2Nv3pbTX9JQqzFoHgQ74Es+22lCVC3LHT3fT WKpA== X-Gm-Message-State: AOJu0YzbncHMxVnI6J4G0nodvQaYYSLjC/hV1wAOB/8CrFntDhUuVjod /2t+S0hNdkfG4BTbNunqMxl8lEgHJ/7zKSxSqA== X-Google-Smtp-Source: AGHT+IGlKnQaLTjiFq56736Kg6KGnbLJJvMDAHLmJHlqvHe3XtgJy7m+AFG7yCUfMAeRjPJzY6RrPY2Ej7c7HoxUYg== X-Received: from almasrymina.svl.corp.google.com ([2620:15c:2c4:200:35de:fff:97b7:db3e]) (user=almasrymina job=sendgmr) by 2002:a25:2083:0:b0:da0:c584:def4 with SMTP id g125-20020a252083000000b00da0c584def4mr519198ybg.1.1699238684278; Sun, 05 Nov 2023 18:44:44 -0800 (PST) Date: Sun, 5 Nov 2023 18:44:11 -0800 In-Reply-To: <20231106024413.2801438-1-almasrymina@google.com> Mime-Version: 1.0 References: <20231106024413.2801438-1-almasrymina@google.com> X-Mailer: git-send-email 2.42.0.869.gea05f2083d-goog Message-ID: <20231106024413.2801438-13-almasrymina@google.com> Subject: [RFC PATCH v3 12/12] selftests: add ncdevmem, netcat for devmem TCP 
From: Mina Almasry To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org Cc: Mina Almasry , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jesper Dangaard Brouer , Ilias Apalodimas , Arnd Bergmann , David Ahern , Willem de Bruijn , Shuah Khan , Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " , Shakeel Butt , Jeroen de Borst , Praveen Kaligineedi , Stanislav Fomichev Precedence: bulk List-ID: X-Mailing-List: linux-media@vger.kernel.org ncdevmem is a devmem TCP netcat. It works similarly to netcat, but it sends and receives data using the devmem TCP APIs. It uses udmabuf as the dmabuf provider. It is compatible with a regular netcat running on a peer, or a ncdevmem running on a peer. In addition to normal netcat support, ncdevmem has a validation mode, where it sends a specific pattern and validates this pattern on the receiver side to ensure data integrity. Suggested-by: Stanislav Fomichev Signed-off-by: Mina Almasry --- RFC v2: - General cleanups (Willem). --- tools/testing/selftests/net/.gitignore | 1 + tools/testing/selftests/net/Makefile | 5 + tools/testing/selftests/net/ncdevmem.c | 546 +++++++++++++++++++++++++ 3 files changed, 552 insertions(+) create mode 100644 tools/testing/selftests/net/ncdevmem.c diff --git a/tools/testing/selftests/net/.gitignore b/tools/testing/selftests/net/.gitignore index 2f9d378edec3..b644dbae58b7 100644 --- a/tools/testing/selftests/net/.gitignore +++ b/tools/testing/selftests/net/.gitignore @@ -17,6 +17,7 @@ ipv6_flowlabel ipv6_flowlabel_mgr log.txt msg_zerocopy +ncdevmem nettest psock_fanout psock_snd diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile index b9804ceb9494..6c6e53c70e99 100644 --- a/tools/testing/selftests/net/Makefile +++ b/tools/testing/selftests/net/Makefile @@ -5,6 +5,10 @@ CFLAGS = -Wall -Wl,--no-as-needed -O2 -g CFLAGS += -I../../../../usr/include/ $(KHDR_INCLUDES) # Additional include paths needed by kselftest.h CFLAGS += -I../ +CFLAGS += -I../../../net/ynl/generated/ +CFLAGS += -I../../../net/ynl/lib/ + +LDLIBS += ../../../net/ynl/lib/ynl.a ../../../net/ynl/generated/protos.a TEST_PROGS := run_netsocktests run_afpackettests test_bpf.sh netdevice.sh \ rtnetlink.sh xfrm_policy.sh test_blackhole_dev.sh @@ -91,6 +95,7 @@ TEST_PROGS += test_bridge_neigh_suppress.sh TEST_PROGS += test_vxlan_nolocalbypass.sh TEST_PROGS += test_bridge_backup_port.sh TEST_PROGS += fdb_flush.sh +TEST_GEN_FILES += ncdevmem TEST_FILES := settings diff --git a/tools/testing/selftests/net/ncdevmem.c b/tools/testing/selftests/net/ncdevmem.c new file mode 100644 index 000000000000..78bc3ad767ca --- /dev/null +++ b/tools/testing/selftests/net/ncdevmem.c @@ -0,0 +1,546 @@ +// SPDX-License-Identifier: GPL-2.0 +#define _GNU_SOURCE +#define __EXPORTED_HEADERS__ + +#include +#include +#include +#include +#include +#include +#include +#define __iovec_defined +#include +#include + +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "netdev-user.h" +#include + +#define PAGE_SHIFT 12 +#define TEST_PREFIX "ncdevmem" +#define NUM_PAGES 16000 + +#ifndef MSG_SOCK_DEVMEM +#define MSG_SOCK_DEVMEM 0x2000000 +#endif + +/* + * tcpdevmem netcat. Works similarly to netcat but does device memory TCP + * instead of regular TCP. 
Uses udmabuf to mock a dmabuf provider. + * + * Usage: + * + * * Without validation: + * + * On server: + * ncdevmem -s -c -f eth1 -n 0000:06:00.0 -l \ + * -p 5201 + * + * On client: + * ncdevmem -s -c -f eth1 -n 0000:06:00.0 -p 5201 + * + * * With Validation: + * On server: + * ncdevmem -s -c -l -f eth1 -n 0000:06:00.0 \ + * -p 5202 -v 1 + * + * On client: + * ncdevmem -s -c -f eth1 -n 0000:06:00.0 -p 5202 \ + * -v 100000 + * + * Note this is compatible with regular netcat. i.e. the sender or receiver can + * be replaced with regular netcat to test the RX or TX path in isolation. + */ + +static char *server_ip = "192.168.1.4"; +static char *client_ip = "192.168.1.2"; +static char *port = "5201"; +static size_t do_validation; +static int queue_num = 15; +static char *ifname = "eth1"; +static char *nic_pci_addr = "0000:06:00.0"; +static unsigned int iterations; + +void print_bytes(void *ptr, size_t size) +{ + unsigned char *p = ptr; + int i; + + for (i = 0; i < size; i++) { + printf("%02hhX ", p[i]); + } + printf("\n"); +} + +void print_nonzero_bytes(void *ptr, size_t size) +{ + unsigned char *p = ptr; + unsigned int i; + + for (i = 0; i < size; i++) + putchar(p[i]); + printf("\n"); +} + +void validate_buffer(void *line, size_t size) +{ + static unsigned char seed = 1; + unsigned char *ptr = line; + int errors = 0; + size_t i; + + for (i = 0; i < size; i++) { + if (ptr[i] != seed) { + fprintf(stderr, + "Failed validation: expected=%u, actual=%u, index=%lu\n", + seed, ptr[i], i); + errors++; + if (errors > 20) + exit(1); + } + seed++; + if (seed == do_validation) + seed = 0; + } + + fprintf(stdout, "Validated buffer\n"); +} + +static void reset_flow_steering(void) +{ + char command[256]; + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple off", + "eth1"); + system(command); + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), "sudo ethtool -K %s ntuple on", + "eth1"); + system(command); +} + +static void configure_flow_steering(void) +{ + char command[256]; + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), + "sudo ethtool -N %s flow-type tcp4 src-ip %s dst-ip %s src-port %s dst-port %s queue %d", + ifname, client_ip, server_ip, port, port, queue_num); + system(command); +} + +/* Triggers a driver reset... + * + * The proper way to do this is probably 'ethtool --reset', but I don't have + * that supported on my current test bed. I resort to changing this + * configuration in the driver which also causes a driver reset... 
+ */ +static void trigger_device_reset(void) +{ + char command[256]; + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), + "sudo ethtool --set-priv-flags %s enable-header-split off", + ifname); + system(command); + + memset(command, 0, sizeof(command)); + snprintf(command, sizeof(command), + "sudo ethtool --set-priv-flags %s enable-header-split on", + ifname); + system(command); +} + +static int bind_rx_queue(unsigned int ifindex, unsigned int dmabuf_fd, + __u32 *queue_idx, unsigned int n_queue_index, + struct ynl_sock **ys) +{ + struct netdev_bind_rx_req *req = NULL; + struct ynl_error yerr; + int ret = 0; + + *ys = ynl_sock_create(&ynl_netdev_family, &yerr); + if (!*ys) { + fprintf(stderr, "YNL: %s\n", yerr.msg); + return -1; + } + + if (ynl_subscribe(*ys, "mgmt")) + goto err_close; + + req = netdev_bind_rx_req_alloc(); + netdev_bind_rx_req_set_ifindex(req, ifindex); + netdev_bind_rx_req_set_dmabuf_fd(req, dmabuf_fd); + __netdev_bind_rx_req_set_queues(req, queue_idx, n_queue_index); + + ret = netdev_bind_rx(*ys, req); + if (!ret) { + perror("netdev_bind_rx"); + goto err_close; + } + + netdev_bind_rx_req_free(req); + + return 0; + +err_close: + fprintf(stderr, "YNL failed: %s\n", (*ys)->err.msg); + netdev_bind_rx_req_free(req); + ynl_sock_destroy(*ys); + return -1; +} + +static void create_udmabuf(int *devfd, int *memfd, int *buf, size_t dmabuf_size) +{ + struct udmabuf_create create; + int ret; + + *devfd = open("/dev/udmabuf", O_RDWR); + if (*devfd < 0) { + fprintf(stderr, + "%s: [skip,no-udmabuf: Unable to access DMA " + "buffer device file]\n", + TEST_PREFIX); + exit(70); + } + + *memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING); + if (*memfd < 0) { + printf("%s: [skip,no-memfd]\n", TEST_PREFIX); + exit(72); + } + + ret = fcntl(*memfd, F_ADD_SEALS, F_SEAL_SHRINK); + if (ret < 0) { + printf("%s: [skip,fcntl-add-seals]\n", TEST_PREFIX); + exit(73); + } + + ret = ftruncate(*memfd, dmabuf_size); + if (ret == -1) { + printf("%s: [FAIL,memfd-truncate]\n", TEST_PREFIX); + exit(74); + } + + memset(&create, 0, sizeof(create)); + + create.memfd = *memfd; + create.offset = 0; + create.size = dmabuf_size; + *buf = ioctl(*devfd, UDMABUF_CREATE, &create); + if (*buf < 0) { + printf("%s: [FAIL, create udmabuf]\n", TEST_PREFIX); + exit(75); + } +} + +int do_server(void) +{ + char ctrl_data[sizeof(int) * 20000]; + size_t non_page_aligned_frags = 0; + struct sockaddr_in client_addr; + struct sockaddr_in server_sin; + size_t page_aligned_frags = 0; + int devfd, memfd, buf, ret; + size_t total_received = 0; + bool is_devmem = false; + char *buf_mem = NULL; + struct ynl_sock *ys; + size_t dmabuf_size; + char iobuf[819200]; + char buffer[256]; + int socket_fd; + int client_fd; + size_t i = 0; + int opt = 1; + + dmabuf_size = getpagesize() * NUM_PAGES; + + create_udmabuf(&devfd, &memfd, &buf, dmabuf_size); + + __u32 *queue_idx = malloc(sizeof(__u32) * 2); + + queue_idx[0] = 14; + queue_idx[1] = 15; + if (bind_rx_queue(3 /* index for eth1 */, buf, queue_idx, 2, &ys)) { + fprintf(stderr, "Failed to bind\n"); + exit(1); + } + + buf_mem = mmap(NULL, dmabuf_size, PROT_READ | PROT_WRITE, MAP_SHARED, + buf, 0); + if (buf_mem == MAP_FAILED) { + perror("mmap()"); + exit(1); + } + + /* Need to trigger the NIC to reallocate its RX pages, otherwise the + * bind doesn't take effect. 
+ */ + trigger_device_reset(); + + sleep(1); + + reset_flow_steering(); + configure_flow_steering(); + + server_sin.sin_family = AF_INET; + server_sin.sin_port = htons(atoi(port)); + + ret = inet_pton(server_sin.sin_family, server_ip, &server_sin.sin_addr); + if (ret != 1) { + printf("%s: [FAIL, parse server address]\n", TEST_PREFIX); + exit(79); + } + + socket_fd = socket(server_sin.sin_family, SOCK_STREAM, 0); + if (socket_fd < 0) { + printf("%s: [FAIL, create socket]\n", TEST_PREFIX); + exit(76); + } + + ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &opt, + sizeof(opt)); + if (ret) { + printf("%s: [FAIL, set sock opt]: %s\n", TEST_PREFIX, + strerror(errno)); + exit(76); + } + ret = setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &opt, + sizeof(opt)); + if (ret) { + printf("%s: [FAIL, set sock opt]: %s\n", TEST_PREFIX, + strerror(errno)); + exit(76); + } + ret = setsockopt(socket_fd, SOL_SOCKET, SO_ZEROCOPY, &opt, + sizeof(opt)); + if (ret) { + printf("%s: [FAIL, set sock opt]: %s\n", TEST_PREFIX, + strerror(errno)); + exit(76); + } + + printf("binding to address %s:%d\n", server_ip, + ntohs(server_sin.sin_port)); + + ret = bind(socket_fd, (struct sockaddr *)&server_sin, sizeof(server_sin)); + if (ret) { + printf("%s: [FAIL, bind]: %s\n", TEST_PREFIX, strerror(errno)); + exit(76); + } + + ret = listen(socket_fd, 1); + if (ret) { + printf("%s: [FAIL, listen]: %s\n", TEST_PREFIX, + strerror(errno)); + exit(76); + } + + socklen_t client_addr_len = sizeof(client_addr); + + inet_ntop(server_sin.sin_family, &server_sin.sin_addr, buffer, + sizeof(buffer)); + printf("Waiting for connection on %s:%d\n", buffer, + ntohs(server_sin.sin_port)); + client_fd = accept(socket_fd, (struct sockaddr *)&client_addr, + &client_addr_len); + + inet_ntop(client_addr.sin_family, &client_addr.sin_addr, buffer, + sizeof(buffer)); + printf("Got connection from %s:%d\n", buffer, + ntohs(client_addr.sin_port)); + + while (1) { + struct iovec iov = { .iov_base = iobuf, + .iov_len = sizeof(iobuf) }; + struct cmsg_devmem *cmsg_devmem = NULL; + struct dma_buf_sync sync = { 0 }; + struct cmsghdr *cm = NULL; + struct msghdr msg = { 0 }; + struct devmemtoken token; + ssize_t ret; + + is_devmem = false; + printf("\n\n"); + + msg.msg_iov = &iov; + msg.msg_iovlen = 1; + msg.msg_control = ctrl_data; + msg.msg_controllen = sizeof(ctrl_data); + ret = recvmsg(client_fd, &msg, MSG_SOCK_DEVMEM); + printf("recvmsg ret=%ld\n", ret); + if (ret < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) { + continue; + } + if (ret < 0) { + perror("recvmsg"); + continue; + } + if (ret == 0) { + printf("client exited\n"); + goto cleanup; + } + + i++; + for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) { + if (cm->cmsg_level != SOL_SOCKET || + (cm->cmsg_type != SCM_DEVMEM_OFFSET && + cm->cmsg_type != SCM_DEVMEM_HEADER)) { + fprintf(stdout, "skipping non-devmem cmsg\n"); + continue; + } + + cmsg_devmem = (struct cmsg_devmem *)CMSG_DATA(cm); + is_devmem = true; + + if (cm->cmsg_type == SCM_DEVMEM_HEADER) { + /* TODO: process data copied from skb's linear + * buffer. + */ + fprintf(stdout, + "SCM_DEVMEM_HEADER. 
" + "cmsg_devmem->frag_size=%u\n", + cmsg_devmem->frag_size); + + continue; + } + + token.token_start = cmsg_devmem->frag_token; + token.token_count = 1; + + total_received += cmsg_devmem->frag_size; + printf("received frag_page=%llu, in_page_offset=%llu," + " frag_offset=%llu, frag_size=%u, token=%u" + " total_received=%lu\n", + cmsg_devmem->frag_offset >> PAGE_SHIFT, + cmsg_devmem->frag_offset % getpagesize(), + cmsg_devmem->frag_offset, cmsg_devmem->frag_size, + cmsg_devmem->frag_token, total_received); + + if (cmsg_devmem->frag_size % getpagesize()) + non_page_aligned_frags++; + else + page_aligned_frags++; + + sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_START; + ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync); + + if (do_validation) + validate_buffer( + ((unsigned char *)buf_mem) + + cmsg_devmem->frag_offset, + cmsg_devmem->frag_size); + else + print_nonzero_bytes( + ((unsigned char *)buf_mem) + + cmsg_devmem->frag_offset, + cmsg_devmem->frag_size); + + sync.flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_END; + ioctl(buf, DMA_BUF_IOCTL_SYNC, &sync); + + ret = setsockopt(client_fd, SOL_SOCKET, + SO_DEVMEM_DONTNEED, &token, + sizeof(token)); + if (ret != 1) { + perror("SO_DEVMEM_DONTNEED not enough tokens"); + exit(1); + } + } + if (!is_devmem) + printf("flow steering error\n"); + + printf("total_received=%lu\n", total_received); + } + + fprintf(stdout, "%s: ok\n", TEST_PREFIX); + + fprintf(stdout, "page_aligned_frags=%lu, non_page_aligned_frags=%lu\n", + page_aligned_frags, non_page_aligned_frags); + + fprintf(stdout, "page_aligned_frags=%lu, non_page_aligned_frags=%lu\n", + page_aligned_frags, non_page_aligned_frags); + +cleanup: + + munmap(buf_mem, dmabuf_size); + close(client_fd); + close(socket_fd); + close(buf); + close(memfd); + close(devfd); + ynl_sock_destroy(ys); + trigger_device_reset(); + + return 0; +} + +int main(int argc, char *argv[]) +{ + int is_server = 0, opt; + + while ((opt = getopt(argc, argv, "ls:c:p:v:q:f:n:i:")) != -1) { + switch (opt) { + case 'l': + is_server = 1; + break; + case 's': + server_ip = optarg; + break; + case 'c': + client_ip = optarg; + break; + case 'p': + port = optarg; + break; + case 'v': + do_validation = atoll(optarg); + break; + case 'q': + queue_num = atoi(optarg); + break; + case 'f': + ifname = optarg; + break; + case 'n': + nic_pci_addr = optarg; + break; + case 'i': + iterations = atoll(optarg); + break; + case '?': + printf("unknown option: %c\n", optopt); + break; + } + } + + for (; optind < argc; optind++) { + printf("extra arguments: %s\n", argv[optind]); + } + + if (is_server) + return do_server(); + + return 0; +}