From patchwork Sat Oct 8 05:53:56 2022
X-Patchwork-Id: 13001639
From: "Jason A. Donenfeld"
To: linux-kernel@vger.kernel.org, patches@lists.linux.dev
Subject: [PATCH v5 4/7] treewide: use get_random_{u8, u16}() when possible, part 2
Date: Fri, 7 Oct 2022 23:53:56 -0600
Message-Id: <20221008055359.286426-5-Jason@zx2c4.com>
In-Reply-To: <20221008055359.286426-1-Jason@zx2c4.com>
References: <20221008055359.286426-1-Jason@zx2c4.com>
Cc: linux-wireless@vger.kernel.org, "Jason A. Donenfeld", x86@kernel.org,
    Vignesh Raghavendra, linux-doc@vger.kernel.org, Peter Zijlstra,
    Catalin Marinas, Dave Hansen, kernel-janitors@vger.kernel.org, KP Singh,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org, Eric Dumazet,
    netdev@vger.kernel.org, linux-mtd@lists.infradead.org,
    kasan-dev@googlegroups.com, "H. Peter Anvin", Andreas Noever,
    WANG Xuerui, Will Deacon, Christoph Hellwig, linux-s390@vger.kernel.org,
    sparclinux@vger.kernel.org, Mauro Carvalho Chehab, Herbert Xu,
    Daniel Borkmann, Jonathan Corbet, linux-rdma@vger.kernel.org,
    Michael Ellerman, Helge Deller, Huacai Chen, Hugh Dickins,
    Russell King, Christophe Leroy, Jozsef Kadlecsik, Jason Gunthorpe,
    Dave Airlie, Ulf Hansson, Paolo Abeni,
Bottomley" , Pablo Neira Ayuso , linux-media@vger.kernel.org, Marco Elver , Kees Cook , Yury Norov , Heiko Carstens , linux-um@lists.infradead.org, linux-block@vger.kernel.org, Richard Weinberger , Borislav Petkov , linux-nvme@lists.infradead.org, loongarch@lists.linux.dev, Jakub Kicinski , Thomas Gleixner , Andy Shevchenko , Johannes Berg , linux-arm-kernel@lists.infradead.org, Jens Axboe , linux-mmc@vger.kernel.org, Thomas Bogendoerfer , Theodore Ts'o , linux-parisc@vger.kernel.org, Greg Kroah-Hartman , linux-usb@vger.kernel.org, Florian Westphal , linux-mips@vger.kernel.org, =?utf-8?q?Chri?= =?utf-8?q?stoph_B=C3=B6hmwalder?= , linux-crypto@vger.kernel.org, Jan Kara , Thomas Graf , linux-fsdevel@vger.kernel.org, Andrew Morton , linuxppc-dev@lists.ozlabs.org, "David S . Miller" Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" Rather than truncate a 32-bit value to a 16-bit value or an 8-bit value, simply use the get_random_{u8,u16}() functions, which are faster than wasting the additional bytes from a 32-bit value. This was done by hand, identifying all of the places where one of the random integer functions was used in a non-32-bit context. Reviewed-by: Kees Cook Signed-off-by: Jason A. Donenfeld --- arch/s390/kernel/process.c | 2 +- lib/test_vmalloc.c | 2 +- net/ipv4/ip_output.c | 2 +- net/netfilter/nf_nat_core.c | 4 ++-- net/rds/bind.c | 2 +- net/sched/sch_sfb.c | 2 +- 6 files changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c index 5ec78555dd2e..42af4b3aa02b 100644 --- a/arch/s390/kernel/process.c +++ b/arch/s390/kernel/process.c @@ -230,7 +230,7 @@ unsigned long arch_align_stack(unsigned long sp) static inline unsigned long brk_rnd(void) { - return (get_random_int() & BRK_RND_MASK) << PAGE_SHIFT; + return (get_random_u16() & BRK_RND_MASK) << PAGE_SHIFT; } unsigned long arch_randomize_brk(struct mm_struct *mm) diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c index a26bbbf20e62..cf7780572f5b 100644 --- a/lib/test_vmalloc.c +++ b/lib/test_vmalloc.c @@ -80,7 +80,7 @@ static int random_size_align_alloc_test(void) int i; for (i = 0; i < test_loop_count; i++) { - rnd = prandom_u32(); + rnd = get_random_u8(); /* * Maximum 1024 pages, if PAGE_SIZE is 4096. diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 04e2034f2f8e..a4fbdbff14b3 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -172,7 +172,7 @@ int ip_build_and_send_pkt(struct sk_buff *skb, const struct sock *sk, * Avoid using the hashed IP ident generator. 
 		 */
 		if (sk->sk_protocol == IPPROTO_TCP)
-			iph->id = (__force __be16)prandom_u32();
+			iph->id = (__force __be16)get_random_u16();
 		else
 			__ip_select_ident(net, iph, 1);
 	}
diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
index 7981be526f26..57c7686ac485 100644
--- a/net/netfilter/nf_nat_core.c
+++ b/net/netfilter/nf_nat_core.c
@@ -468,7 +468,7 @@ static void nf_nat_l4proto_unique_tuple(struct nf_conntrack_tuple *tuple,
 	if (range->flags & NF_NAT_RANGE_PROTO_OFFSET)
 		off = (ntohs(*keyptr) - ntohs(range->base_proto.all));
 	else
-		off = prandom_u32();
+		off = get_random_u16();
 
 	attempts = range_size;
 	if (attempts > max_attempts)
@@ -490,7 +490,7 @@ static void nf_nat_l4proto_unique_tuple(struct nf_conntrack_tuple *tuple,
 	if (attempts >= range_size || attempts < 16)
 		return;
 	attempts /= 2;
-	off = prandom_u32();
+	off = get_random_u16();
 	goto another_round;
 }
 
diff --git a/net/rds/bind.c b/net/rds/bind.c
index 5b5fb4ca8d3e..97a29172a8ee 100644
--- a/net/rds/bind.c
+++ b/net/rds/bind.c
@@ -104,7 +104,7 @@ static int rds_add_bound(struct rds_sock *rs, const struct in6_addr *addr,
 			return -EINVAL;
 		last = rover;
 	} else {
-		rover = max_t(u16, prandom_u32(), 2);
+		rover = max_t(u16, get_random_u16(), 2);
 		last = rover - 1;
 	}
 
diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
index 2829455211f8..7eb70acb4d58 100644
--- a/net/sched/sch_sfb.c
+++ b/net/sched/sch_sfb.c
@@ -379,7 +379,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		goto enqueue;
 	}
 
-	r = prandom_u32() & SFB_MAX_PROB;
+	r = get_random_u16() & SFB_MAX_PROB;
 
 	if (unlikely(r < p_min)) {
 		if (unlikely(p_min > SFB_MAX_PROB / 2)) {
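
To make the speed argument in the changelog concrete: the kernel hands out
these random integers from per-CPU batches of random bytes, so a caller that
pulls 32 bits and keeps only 8 or 16 of them discards bytes that could have
satisfied later callers. The userspace sketch below models that effect only;
the 64-byte batch, the rand()-based refill and the get_u8()/get_u16()/get_u32()
helpers are assumptions made for this illustration, not the kernel's actual
implementation.

/*
 * Illustrative userspace model, not kernel code: batch size, refill logic
 * and helper names are stand-ins chosen for this sketch.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static uint8_t batch[64];			/* stand-in for a per-CPU entropy batch */
static size_t batch_pos = sizeof(batch);	/* force a refill on first use */
static unsigned long refills;

static void refill_batch(void)
{
	/* A real generator would refill from a CSPRNG; rand() is a stand-in. */
	for (size_t i = 0; i < sizeof(batch); i++)
		batch[i] = (uint8_t)rand();
	batch_pos = 0;
	refills++;
}

static void get_bytes(void *out, size_t len)
{
	if (batch_pos + len > sizeof(batch))
		refill_batch();
	memcpy(out, &batch[batch_pos], len);
	batch_pos += len;
}

static uint8_t  get_u8(void)  { uint8_t v;  get_bytes(&v, sizeof(v)); return v; }
static uint16_t get_u16(void) { uint16_t v; get_bytes(&v, sizeof(v)); return v; }
static uint32_t get_u32(void) { uint32_t v; get_bytes(&v, sizeof(v)); return v; }

int main(void)
{
	int i;

	/* Old pattern: pull 32 bits, keep 16 -- two bytes are thrown away. */
	refills = 0;
	for (i = 0; i < 1000; i++)
		(void)(uint16_t)get_u32();
	printf("truncated u32 -> u16: %lu refills\n", refills);

	/* New pattern: pull exactly the 16 bits that are needed. */
	refills = 0;
	for (i = 0; i < 1000; i++)
		(void)get_u16();
	printf("native u16:           %lu refills\n", refills);

	/* Narrower still: a u8 consumer takes only one byte per call. */
	refills = 0;
	for (i = 0; i < 1000; i++)
		(void)get_u8();
	printf("native u8:            %lu refills\n", refills);

	return 0;
}

With a 64-byte batch and 1000 draws, the truncating loop refills roughly twice
as often as the native u16 loop (and four times as often as the u8 loop),
which is the waste the conversions above avoid.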