
[net] net/core/dev.c: Ensure pfmemalloc skbs are correctly handled when receiving

Message ID 20210417000839.6618-1-xie.he.0141@gmail.com (mailing list archive)
State Rejected
Delegated to: Netdev Maintainers
Series [net] net/core/dev.c: Ensure pfmemalloc skbs are correctly handled when receiving

Checks

Context Check Description
netdev/cover_letter success
netdev/fixes_present success
netdev/patch_count success
netdev/tree_selection success Clearly marked for net
netdev/subject_prefix success
netdev/cc_maintainers warning 1 maintainers not CCed: daniel@iogearbox.net
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success
netdev/module_param success Was 0 now: 0
netdev/build_32bit success Errors and warnings before: 10 this patch: 10
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/verify_fixes success
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 16 lines checked
netdev/build_allmodconfig_warn success Errors and warnings before: 10 this patch: 10
netdev/header_inline success

Commit Message

Xie He April 17, 2021, 12:08 a.m. UTC
When an skb is allocated by "__netdev_alloc_skb" in "net/core/skbuff.c",
if "sk_memalloc_socks()" is true and there is not sufficient memory, the
skb will be allocated using the emergency memory reserves. Such skbs are
called pfmemalloc skbs.
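
For context, the allocation-side logic looks roughly like the sketch below
(paraphrased from "__netdev_alloc_skb" in "net/core/skbuff.c"; alignment,
IRQ handling, error paths and the small-packet fast path are omitted):

/* Paraphrased sketch of __netdev_alloc_skb() in net/core/skbuff.c */
struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
				   gfp_t gfp_mask)
{
	struct page_frag_cache *nc;
	struct sk_buff *skb;
	bool pfmemalloc;
	void *data;

	/* If any SOCK_MEMALLOC socket exists, allow this allocation to dip
	 * into the emergency reserves.
	 */
	if (sk_memalloc_socks())
		gfp_mask |= __GFP_MEMALLOC;

	nc = this_cpu_ptr(&netdev_alloc_cache);
	data = page_frag_alloc(nc, len, gfp_mask);
	/* The frag cache records whether the backing page came from the
	 * reserves.
	 */
	pfmemalloc = nc->pfmemalloc;

	skb = __build_skb(data, len);
	if (pfmemalloc)
		skb->pfmemalloc = 1;	/* this is what makes it a pfmemalloc skb */
	skb->head_frag = 1;
	skb->dev = dev;

	return skb;
}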

pfmemalloc skbs must be specially handled in "net/core/dev.c" when
receiving. They must NOT be delivered to the target protocol if
"skb_pfmemalloc_protocol(skb)" is false.

However, if "sk_memalloc_socks()" becomes false after a pfmemalloc skb is
allocated but before the skb reaches "__netif_receive_skb", the skb will
be handled by "__netif_receive_skb" as a normal skb. This causes the skb
to be delivered to the target protocol even though
"skb_pfmemalloc_protocol(skb)" is false.

This patch fixes this problem by ensuring all pfmemalloc skbs are handled
by "__netif_receive_skb" as pfmemalloc skbs.

"__netif_receive_skb_list" has the same problem as "__netif_receive_skb".
This patch also fixes it.

Fixes: b4b9e3558508 ("netvm: set PF_MEMALLOC as appropriate during SKB processing")
Cc: Mel Gorman <mgorman@suse.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Neil Brown <neilb@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Christoph Lameter <cl@linux.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Xie He <xie.he.0141@gmail.com>
---
 net/core/dev.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Eric Dumazet April 17, 2021, 4:52 a.m. UTC | #1
On Sat, Apr 17, 2021 at 2:08 AM Xie He <xie.he.0141@gmail.com> wrote:

The race window has been considered small enough that we prefer the code
as it is.

The reason we prefer the current code is that sk_memalloc_socks() is
implemented with a static key.

Trading a minor (racy) condition for extra cycles on every received
packet is a serious concern.

What matters is a persistent condition that would _deplete_ memory, not
for a dozen packets but for thousands. Can you demonstrate such an issue?
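
(For reference, "sk_memalloc_socks()" is built on a static branch, roughly
as sketched below; the exact declarations vary by kernel version. While
the key is disabled, the branch is patched out of the hot path, so the
existing gate costs essentially nothing when no SOCK_MEMALLOC sockets
exist.)

/* Sketch of include/net/sock.h / net/core/sock.c */
DECLARE_STATIC_KEY_FALSE(memalloc_socks_key);

static inline int sk_memalloc_socks(void)
{
	/* A NOP-patched branch until the first SOCK_MEMALLOC socket enables
	 * the key via static_branch_inc().
	 */
	return static_branch_unlikely(&memalloc_socks_key);
}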

Patch

diff --git a/net/core/dev.c b/net/core/dev.c
index 1f79b9aa9a3f..3e6b7879daef 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5479,7 +5479,7 @@ static int __netif_receive_skb(struct sk_buff *skb)
 {
 	int ret;
 
-	if (sk_memalloc_socks() && skb_pfmemalloc(skb)) {
+	if (skb_pfmemalloc(skb)) {
 		unsigned int noreclaim_flag;
 
 		/*
@@ -5507,7 +5507,7 @@ static void __netif_receive_skb_list(struct list_head *head)
 	bool pfmemalloc = false; /* Is current sublist PF_MEMALLOC? */
 
 	list_for_each_entry_safe(skb, next, head, list) {
-		if ((sk_memalloc_socks() && skb_pfmemalloc(skb)) != pfmemalloc) {
+		if (skb_pfmemalloc(skb) != pfmemalloc) {
 			struct list_head sublist;
 
 			/* Handle the previous sublist */
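
For context, the second hunk sits inside "__netif_receive_skb_list",
which batches consecutive skbs with the same pfmemalloc state into
sublists. A rough pre-patch sketch of the whole function:

static void __netif_receive_skb_list(struct list_head *head)
{
	unsigned long noreclaim_flag = 0;
	struct sk_buff *skb, *next;
	bool pfmemalloc = false;	/* Is current sublist PF_MEMALLOC? */

	list_for_each_entry_safe(skb, next, head, list) {
		if ((sk_memalloc_socks() && skb_pfmemalloc(skb)) != pfmemalloc) {
			struct list_head sublist;

			/* Flush the sublist built so far under the old state */
			list_cut_before(&sublist, head, &skb->list);
			if (!list_empty(&sublist))
				__netif_receive_skb_list_core(&sublist, pfmemalloc);
			pfmemalloc = !pfmemalloc;
			/* Enter/leave PF_MEMALLOC to match the new state */
			if (pfmemalloc)
				noreclaim_flag = memalloc_noreclaim_save();
			else
				memalloc_noreclaim_restore(noreclaim_flag);
		}
	}
	/* Handle the remaining sublist, if any */
	if (!list_empty(head))
		__netif_receive_skb_list_core(head, pfmemalloc);
	if (pfmemalloc)
		memalloc_noreclaim_restore(noreclaim_flag);
}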