From patchwork Tue Jun 7 17:17:32 2022
X-Patchwork-Submitter: Eric Dumazet
X-Patchwork-Id: 12872187
X-Patchwork-Delegate: kuba@kernel.org
From: Eric Dumazet
Miller" , Jakub Kicinski , Paolo Abeni Cc: netdev , Eric Dumazet , Eric Dumazet Subject: [PATCH net-next 8/8] net: add napi_get_frags_check() helper Date: Tue, 7 Jun 2022 10:17:32 -0700 Message-Id: <20220607171732.21191-9-eric.dumazet@gmail.com> X-Mailer: git-send-email 2.36.1.255.ge46751e96f-goog In-Reply-To: <20220607171732.21191-1-eric.dumazet@gmail.com> References: <20220607171732.21191-1-eric.dumazet@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Eric Dumazet This is a follow up of commit 3226b158e67c ("net: avoid 32 x truesize under-estimation for tiny skbs") When/if we increase MAX_SKB_FRAGS, we better make sure the old bug will not come back. Adding a check in napi_get_frags() would be costly, even if using DEBUG_NET_WARN_ON_ONCE(). Signed-off-by: Eric Dumazet --- net/core/dev.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/net/core/dev.c b/net/core/dev.c index 27ad09ad80a4550097ce4d113719a558b5e2a811..4ce9b2563a116066d85bae7a862e38fb160ef0e2 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -6351,6 +6351,23 @@ int dev_set_threaded(struct net_device *dev, bool threaded) } EXPORT_SYMBOL(dev_set_threaded); +/* Double check that napi_get_frags() allocates skbs with + * skb->head being backed by slab, not a page fragment. + * This is to make sure bug fixed in 3226b158e67c + * ("net: avoid 32 x truesize under-estimation for tiny skbs") + * does not accidentally come back. + */ +static void napi_get_frags_check(struct napi_struct *napi) +{ + struct sk_buff *skb; + + local_bh_disable(); + skb = napi_get_frags(napi); + WARN_ON_ONCE(skb && skb->head_frag); + napi_free_frags(napi); + local_bh_enable(); +} + void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi, int (*poll)(struct napi_struct *, int), int weight) { @@ -6378,6 +6395,7 @@ void netif_napi_add_weight(struct net_device *dev, struct napi_struct *napi, set_bit(NAPI_STATE_NPSVC, &napi->state); list_add_rcu(&napi->dev_list, &dev->napi_list); napi_hash_add(napi); + napi_get_frags_check(napi); /* Create kthread for this napi if dev->threaded is set. * Clear dev->threaded if kthread creation failed so that * threaded mode will not be enabled in napi_enable().