From patchwork Wed Jul 3 10:18:59 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 11029303
From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
    xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
    netdev@vger.kernel.org, daniel@iogearbox.net,
    jakub.kicinski@netronome.com, john.fastabend@gmail.com,
    Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Subject: [PATCH v6 net-next 1/5] xdp: allow same allocator usage
Date: Wed, 3 Jul 2019 13:18:59 +0300
Message-Id: <20190703101903.8411-2-ivan.khoronzhuk@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190703101903.8411-1-ivan.khoronzhuk@linaro.org>
References: <20190703101903.8411-1-ivan.khoronzhuk@linaro.org>

First of all, it is an absolute requirement that each RX-queue have its
own page_pool object/allocator. This change is intended to handle the
special case where a single RX-queue can receive packets from two
different net_devices.

To protect against using the same allocator for two different RX-queues,
add queue_index to struct xdp_mem_allocator and catch the obvious
mistake of a queue_index mismatch, as proposed by Jesper Dangaard
Brouer. A reference count is also added, so that registering the same
allocator a second time only bumps refcnt, and the allocator is
disconnected only when the last user unregisters.

Adding this at the xdp allocator level allows drivers with such a
dependency to change allocators without further modifications.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
---
 include/net/xdp_priv.h |  2 ++
 net/core/xdp.c         | 57 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+)
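Note below the fold (illustrative only, not part of the patch): the
sketch here shows how a driver in the targeted special case -- two
net_devices fed by one RX queue, e.g. a cpsw-style dual-MAC setup --
would register one page_pool for both. The function and variable names
(example_reg_shared_pool, rxq_a/rxq_b, ndev_a/ndev_b) are hypothetical;
only xdp_rxq_info_reg(), xdp_rxq_info_reg_mem_model() and
MEM_TYPE_PAGE_POOL are real kernel API.

#include <net/xdp.h>
#include <net/page_pool.h>

/* Hedged sketch: register one page_pool (one per RX queue) as the
 * memory model for two xdp_rxq_info structs sharing queue 'qidx'.
 */
static int example_reg_shared_pool(struct net_device *ndev_a,
				   struct net_device *ndev_b,
				   struct xdp_rxq_info *rxq_a,
				   struct xdp_rxq_info *rxq_b,
				   struct page_pool *pool, u32 qidx)
{
	int ret;

	ret = xdp_rxq_info_reg(rxq_a, ndev_a, qidx);
	if (ret)
		return ret;

	ret = xdp_rxq_info_reg_mem_model(rxq_a, MEM_TYPE_PAGE_POOL, pool);
	if (ret)
		return ret;

	ret = xdp_rxq_info_reg(rxq_b, ndev_b, qidx);
	if (ret)
		return ret;

	/* Same allocator, same queue_index: with this patch the existing
	 * xdp_mem_allocator entry is found, its refcnt is bumped, and
	 * rxq_b inherits the same mem.id. A mismatching queue_index
	 * would now fail with -EINVAL.
	 */
	return xdp_rxq_info_reg_mem_model(rxq_b, MEM_TYPE_PAGE_POOL, pool);
}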
diff --git a/include/net/xdp_priv.h b/include/net/xdp_priv.h
index 6a8cba6ea79a..9858a4057842 100644
--- a/include/net/xdp_priv.h
+++ b/include/net/xdp_priv.h
@@ -18,6 +18,8 @@ struct xdp_mem_allocator {
 	struct rcu_head rcu;
 	struct delayed_work defer_wq;
 	unsigned long defer_warn;
+	unsigned long refcnt;
+	u32 queue_index;
 };
 
 #endif /* __LINUX_NET_XDP_PRIV_H__ */
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 829377cc83db..4f0ddbb3717a 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -98,6 +98,18 @@ static bool __mem_id_disconnect(int id, bool force)
 		WARN(1, "Request remove non-existing id(%d), driver bug?", id);
 		return true;
 	}
+
+	/* To avoid a second hash lookup, decrement refcnt here; only when
+	 * it reaches zero may the disconnect (workqueue included) proceed.
+	 */
+	if (xa->refcnt)
+		xa->refcnt--;
+
+	if (xa->refcnt) {
+		mutex_unlock(&mem_id_lock);
+		return true;
+	}
+
 	xa->disconnect_cnt++;
 
 	/* Detects in-flight packet-pages for page_pool */
@@ -312,6 +324,33 @@ static bool __is_supported_mem_type(enum xdp_mem_type type)
 	return true;
 }
 
+static struct xdp_mem_allocator *xdp_allocator_find(void *allocator)
+{
+	struct xdp_mem_allocator *xae, *xa = NULL;
+	struct rhashtable_iter iter;
+
+	if (!allocator)
+		return xa;
+
+	rhashtable_walk_enter(mem_id_ht, &iter);
+	do {
+		rhashtable_walk_start(&iter);
+
+		while ((xae = rhashtable_walk_next(&iter)) && !IS_ERR(xae)) {
+			if (xae->allocator == allocator) {
+				xa = xae;
+				break;
+			}
+		}
+
+		rhashtable_walk_stop(&iter);
+
+	} while (xae == ERR_PTR(-EAGAIN));
+	rhashtable_walk_exit(&iter);
+
+	return xa;
+}
+
 int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 			       enum xdp_mem_type type, void *allocator)
 {
@@ -347,6 +386,22 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 		}
 	}
 
+	mutex_lock(&mem_id_lock);
+	xdp_alloc = xdp_allocator_find(allocator);
+	if (xdp_alloc) {
+		/* Only one allocator per queue is supported */
+		if (xdp_alloc->queue_index != xdp_rxq->queue_index) {
+			mutex_unlock(&mem_id_lock);
+			return -EINVAL;
+		}
+
+		xdp_rxq->mem.id = xdp_alloc->mem.id;
+		xdp_alloc->refcnt++;
+		mutex_unlock(&mem_id_lock);
+		return 0;
+	}
+	mutex_unlock(&mem_id_lock);
+
 	xdp_alloc = kzalloc(sizeof(*xdp_alloc), gfp);
 	if (!xdp_alloc)
 		return -ENOMEM;
@@ -360,6 +415,8 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 	xdp_rxq->mem.id = id;
 	xdp_alloc->mem = xdp_rxq->mem;
 	xdp_alloc->allocator = allocator;
+	xdp_alloc->refcnt = 1;
+	xdp_alloc->queue_index = xdp_rxq->queue_index;
 
 	/* Insert allocator into ID lookup table */
 	ptr = rhashtable_insert_slow(mem_id_ht, &id, &xdp_alloc->node);
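The teardown side pairs with the refcnt naturally. A hedged sketch of
what the counter buys (example_unreg_shared_pool is hypothetical;
xdp_rxq_info_unreg() is the real API, and the refcnt transitions in the
comments assume the registration order shown in the sketch above): each
sharer unregisters as usual, and the shared xdp_mem_allocator is only
disconnected when the last reference is dropped.

/* Hedged sketch of teardown for two xdp_rxq_info structs that share
 * one allocator; the hash-table entry stays alive until the second
 * unregister drops refcnt to zero in __mem_id_disconnect().
 */
static void example_unreg_shared_pool(struct xdp_rxq_info *rxq_a,
				      struct xdp_rxq_info *rxq_b)
{
	xdp_rxq_info_unreg(rxq_a);	/* refcnt 2 -> 1, allocator kept */
	xdp_rxq_info_unreg(rxq_b);	/* refcnt 1 -> 0, disconnect runs */
}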