Message ID | 20211105014953.972946-3-dima@arista.com (mailing list archive) |
---|---|
State | Changes Requested |
Delegated to: | Netdev Maintainers |
Series | tcp/md5: Generic tcp_sig_pool |
On Thu, Nov 4, 2021 at 6:50 PM Dmitry Safonov <dima@arista.com> wrote:
>
> In a quite unlikely scenario, when __tcp_alloc_md5sig_pool() succeeds
> in crypto_alloc_ahash() but later fails to allocate a per-cpu request
> or scratch area, the ahash will be leaked.
> In theory this can happen multiple times under OOM conditions, once
> for every setsockopt(TCP_MD5SIG{,_EXT}).

Then store it in a global, like the other parts?

This makes the patch smaller, and hopefully the allocations will
eventually succeed, one at a time.

Bug fixes should target the net tree, with a Fixes: tag, not be buried
in a patch series targeting net-next (which is closed, btw).

Thanks.

diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index b461ae573afc82a2c37321b13c2d76f61cd13b53..e2353e35693935fb5abd7da4531c98b86fd35e1c 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -4260,13 +4260,14 @@ static bool tcp_md5sig_pool_populated = false;
 
 static void __tcp_alloc_md5sig_pool(void)
 {
-	struct crypto_ahash *hash;
+	static struct crypto_ahash *hash;
 	int cpu;
 
-	hash = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(hash))
-		return;
-
+	if (IS_ERR_OR_NULL(hash)) {
+		hash = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
+		if (IS_ERR(hash))
+			return;
+	}
 	for_each_possible_cpu(cpu) {
 		void *scratch = per_cpu(tcp_md5sig_pool, cpu).scratch;
 		struct ahash_request *req;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index c0856a6af9f5..eb478028b1ea 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -4276,15 +4276,13 @@ static void __tcp_alloc_md5sig_pool(void)
 					       GFP_KERNEL,
 					       cpu_to_node(cpu));
 			if (!scratch)
-				return;
+				goto out_free;
 			per_cpu(tcp_md5sig_pool, cpu).scratch = scratch;
 		}
-		if (per_cpu(tcp_md5sig_pool, cpu).md5_req)
-			continue;
 
 		req = ahash_request_alloc(hash, GFP_KERNEL);
 		if (!req)
-			return;
+			goto out_free;
 
 		ahash_request_set_callback(req, 0, NULL, NULL);
 
@@ -4295,6 +4293,16 @@ static void __tcp_alloc_md5sig_pool(void)
 	 */
 	smp_wmb();
 	tcp_md5sig_pool_populated = true;
+	return;
+
+out_free:
+	for_each_possible_cpu(cpu) {
+		if (per_cpu(tcp_md5sig_pool, cpu).md5_req == NULL)
+			break;
+		ahash_request_free(per_cpu(tcp_md5sig_pool, cpu).md5_req);
+		per_cpu(tcp_md5sig_pool, cpu).md5_req = NULL;
+	}
+	crypto_free_ahash(hash);
 }
 
 bool tcp_alloc_md5sig_pool(void)
In a quite unlikely scenario, when __tcp_alloc_md5sig_pool() succeeds
in crypto_alloc_ahash() but later fails to allocate a per-cpu request
or scratch area, the ahash will be leaked.
In theory this can happen multiple times under OOM conditions, once
for every setsockopt(TCP_MD5SIG{,_EXT}).

Add a clean-up path to free the ahash.

Signed-off-by: Dmitry Safonov <dima@arista.com>
---
 net/ipv4/tcp.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)