From patchwork Mon Aug 5 21:25:17 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13754159
Date: Mon, 5 Aug 2024 21:25:17 +0000
In-Reply-To: <20240805212536.2172174-1-almasrymina@google.com>
References: <20240805212536.2172174-1-almasrymina@google.com>
Message-ID: <20240805212536.2172174-5-almasrymina@google.com>
X-Mailer: git-send-email 2.46.0.rc2.264.g509ed76dc8-goog
Subject: [PATCH net-next v18 04/14] netdev: netdevice devmem allocator
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-alpha@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-kselftest@vger.kernel.org,
 bpf@vger.kernel.org, linux-media@vger.kernel.org,
 dri-devel@lists.freedesktop.org
Cc: Mina Almasry, Donald Hunter, Jakub Kicinski, "David S. Miller",
 Eric Dumazet, Paolo Abeni, Jonathan Corbet, Richard Henderson,
 Ivan Kokshaysky, Matt Turner, Thomas Bogendoerfer,
 "James E.J. Bottomley", Helge Deller, Andreas Larsson,
 Jesper Dangaard Brouer, Ilias Apalodimas, Steven Rostedt,
 Masami Hiramatsu, Mathieu Desnoyers, Arnd Bergmann,
 Steffen Klassert, Herbert Xu, David Ahern, Willem de Bruijn,
 Shuah Khan, Sumit Semwal, Christian König, Bagas Sanjaya,
 Christoph Hellwig, Nikolay Aleksandrov, Taehee Yoo,
 Pavel Begunkov, David Wei, Jason Gunthorpe, Yunsheng Lin,
 Shailend Chand, Harshitha Ramamurthy, Shakeel Butt,
 Jeroen de Borst, Praveen Kaligineedi, Willem de Bruijn,
 Kaiyuan Zhang

Implement the netdev devmem allocator. The allocator takes a given
struct net_devmem_dmabuf_binding as input and allocates net_iovs from
that binding.

The allocation simply delegates to the binding's genpool for the
allocation logic and wraps the returned memory region in a net_iov
struct.

Signed-off-by: Willem de Bruijn
Signed-off-by: Kaiyuan Zhang
Signed-off-by: Mina Almasry
Reviewed-by: Pavel Begunkov

---

v17:
- Don't acquire a binding ref for every allocation (Jakub).

v11:
- Fix extraneous inline directive (Paolo).

v8:
- Rename netdev_dmabuf_binding -> net_devmem_dmabuf_binding to avoid
  a patch-by-patch build error.
- Move niov->pp_magic/pp/pp_ref_counter usage to a later patch to
  avoid a patch-by-patch build error.

v7:
- netdev_ -> net_devmem_* naming (Yunsheng).

v6:
- Add a comment on net_iov_dma_addr to explain why we don't use
  niov->dma_addr (Pavel).
- Refactor the new functions into net/core/devmem.c (Pavel).

v1:
- Rename devmem -> dmabuf (David).
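For context, a minimal sketch of the intended calling pattern, assuming
a binding already created by net_devmem_bind_dmabuf() earlier in this
series; the demo_alloc_one() wrapper below is hypothetical and not part
of the patch:

	#include <net/devmem.h>

	/* Hypothetical consumer of the allocator, e.g. an rx refill path. */
	static int demo_alloc_one(struct net_devmem_dmabuf_binding *binding)
	{
		struct net_iov *niov;

		/* Carve one PAGE_SIZE region out of the binding's genpool. */
		niov = net_devmem_alloc_dmabuf(binding);
		if (!niov)
			return -ENOMEM;	/* pool exhausted, or feature compiled out */

		/* ... hand the net_iov to the rx path ... */

		/* Return the region to the binding's genpool when done. */
		net_devmem_free_dmabuf(niov);
		return 0;
	}

Note that when the feature is compiled out, the static inline stub of
net_devmem_alloc_dmabuf() also returns NULL, so callers need no config
guards of their own.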
---
 include/net/devmem.h | 13 +++++++++++++
 include/net/netmem.h | 18 ++++++++++++++++++
 net/core/devmem.c    | 40 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 71 insertions(+)

diff --git a/include/net/devmem.h b/include/net/devmem.h
index c7bd6a0a6b9e9..2e7cc46d4d3ca 100644
--- a/include/net/devmem.h
+++ b/include/net/devmem.h
@@ -69,7 +69,20 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding);
 int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
 				    struct net_devmem_dmabuf_binding *binding);
 void dev_dmabuf_uninstall(struct net_device *dev);
+struct net_iov *
+net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding);
+void net_devmem_free_dmabuf(struct net_iov *ppiov);
 #else
+static inline struct net_iov *
+net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
+{
+	return NULL;
+}
+
+static inline void net_devmem_free_dmabuf(struct net_iov *ppiov)
+{
+}
+
 static inline void
 __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
 {

diff --git a/include/net/netmem.h b/include/net/netmem.h
index 41e96c2f94b5c..664df8325ece5 100644
--- a/include/net/netmem.h
+++ b/include/net/netmem.h
@@ -14,8 +14,26 @@
 struct net_iov {
 	struct dmabuf_genpool_chunk_owner *owner;
+	unsigned long dma_addr;
 };
 
+static inline struct dmabuf_genpool_chunk_owner *
+net_iov_owner(const struct net_iov *niov)
+{
+	return niov->owner;
+}
+
+static inline unsigned int net_iov_idx(const struct net_iov *niov)
+{
+	return niov - net_iov_owner(niov)->niovs;
+}
+
+static inline struct net_devmem_dmabuf_binding *
+net_iov_binding(const struct net_iov *niov)
+{
+	return net_iov_owner(niov)->binding;
+}
+
 /* netmem */
 
 /**

diff --git a/net/core/devmem.c b/net/core/devmem.c
index 9a357235bde8f..3f73d0bda023f 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -32,6 +32,14 @@ static void net_devmem_dmabuf_free_chunk_owner(struct gen_pool *genpool,
 	kfree(owner);
 }
 
+static dma_addr_t net_devmem_get_dma_addr(const struct net_iov *niov)
+{
+	struct dmabuf_genpool_chunk_owner *owner = net_iov_owner(niov);
+
+	return owner->base_dma_addr +
+	       ((dma_addr_t)net_iov_idx(niov) << PAGE_SHIFT);
+}
+
 void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
 {
 	size_t size, avail;
@@ -54,6 +62,38 @@ void __net_devmem_dmabuf_binding_free(struct net_devmem_dmabuf_binding *binding)
 	kfree(binding);
 }
 
+struct net_iov *
+net_devmem_alloc_dmabuf(struct net_devmem_dmabuf_binding *binding)
+{
+	struct dmabuf_genpool_chunk_owner *owner;
+	unsigned long dma_addr;
+	struct net_iov *niov;
+	ssize_t offset;
+	ssize_t index;
+
+	dma_addr = gen_pool_alloc_owner(binding->chunk_pool, PAGE_SIZE,
+					(void **)&owner);
+	if (!dma_addr)
+		return NULL;
+
+	offset = dma_addr - owner->base_dma_addr;
+	index = offset / PAGE_SIZE;
+	niov = &owner->niovs[index];
+
+	niov->dma_addr = 0;
+
+	return niov;
+}
+
+void net_devmem_free_dmabuf(struct net_iov *niov)
+{
+	struct net_devmem_dmabuf_binding *binding = net_iov_binding(niov);
+	unsigned long dma_addr = net_devmem_get_dma_addr(niov);
+
+	if (gen_pool_has_addr(binding->chunk_pool, dma_addr, PAGE_SIZE))
+		gen_pool_free(binding->chunk_pool, dma_addr, PAGE_SIZE);
+}
+
 /* Protected by rtnl_lock() */
 static DEFINE_XARRAY_FLAGS(net_devmem_dmabuf_bindings, XA_FLAGS_ALLOC1);
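As a sanity note on the address math above: net_iov_idx() and
net_devmem_get_dma_addr() invert the index computation done in
net_devmem_alloc_dmabuf(), since each net_iov in owner->niovs[] covers
exactly one PAGE_SIZE slot of the owner's contiguous DMA range. An
illustrative check (the demo_addr_round_trip() helper is hypothetical,
written as if it lived in net/core/devmem.c next to the static
net_devmem_get_dma_addr()):

	/* Illustrative only: recomputing the dma address from the net_iov
	 * recovered via the index math must yield the original address.
	 */
	static bool demo_addr_round_trip(struct dmabuf_genpool_chunk_owner *owner,
					 unsigned long dma_addr)
	{
		ssize_t index = (dma_addr - owner->base_dma_addr) / PAGE_SIZE;
		struct net_iov *niov = &owner->niovs[index];

		/* net_iov_idx(niov) == index, so this reconstructs dma_addr. */
		return net_devmem_get_dma_addr(niov) == (dma_addr_t)dma_addr;
	}

Note also the gen_pool_has_addr() check in net_devmem_free_dmabuf():
the free is skipped when the address is no longer backed by a chunk in
the binding's pool, which appears intended to make a late free after
the chunks have been torn down harmless.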