From patchwork Mon Oct 10 23:06:10 2022
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 13003328
X-Patchwork-Delegate: kuba@kernel.org
From: "Jason A. Donenfeld"
To: linux-kernel@vger.kernel.org, patches@lists.linux.dev
Cc: "Jason A. Donenfeld", Andreas Noever, Andrew Morton, Andy Shevchenko,
    Borislav Petkov, Catalin Marinas, Christoph Böhmwalder, Christoph Hellwig,
    Christophe Leroy, Daniel Borkmann, Dave Airlie, Dave Hansen,
    David S. Miller, Eric Dumazet, Florian Westphal, Greg Kroah-Hartman,
    H. Peter Anvin, Heiko Carstens, Helge Deller, Herbert Xu, Huacai Chen,
    Hugh Dickins, Jakub Kicinski,
Bottomley" , Jan Kara , Jason Gunthorpe , Jens Axboe , Johannes Berg , Jonathan Corbet , Jozsef Kadlecsik , KP Singh , Kees Cook , Marco Elver , Mauro Carvalho Chehab , Michael Ellerman , Pablo Neira Ayuso , Paolo Abeni , Peter Zijlstra , Richard Weinberger , Russell King , Theodore Ts'o , Thomas Bogendoerfer , Thomas Gleixner , Thomas Graf , Ulf Hansson , Vignesh Raghavendra , WANG Xuerui , Will Deacon , Yury Norov , dri-devel@lists.freedesktop.org, kasan-dev@googlegroups.com, kernel-janitors@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-block@vger.kernel.org, linux-crypto@vger.kernel.org, linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-media@vger.kernel.org, linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org, linux-nvme@lists.infradead.org, linux-parisc@vger.kernel.org, linux-rdma@vger.kernel.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-usb@vger.kernel.org, linux-wireless@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev, netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org Subject: [PATCH v6 4/7] treewide: use get_random_{u8,u16}() when possible, part 2 Date: Mon, 10 Oct 2022 17:06:10 -0600 Message-Id: <20221010230613.1076905-5-Jason@zx2c4.com> In-Reply-To: <20221010230613.1076905-1-Jason@zx2c4.com> References: <20221010230613.1076905-1-Jason@zx2c4.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Rather than truncate a 32-bit value to a 16-bit value or an 8-bit value, simply use the get_random_{u8,u16}() functions, which are faster than wasting the additional bytes from a 32-bit value. This was done by hand, identifying all of the places where one of the random integer functions was used in a non-32-bit context. Reviewed-by: Greg Kroah-Hartman Reviewed-by: Kees Cook Reviewed-by: Yury Norov Signed-off-by: Jason A. Donenfeld Acked-by: Heiko Carstens --- arch/s390/kernel/process.c | 2 +- drivers/mtd/nand/raw/nandsim.c | 2 +- lib/test_vmalloc.c | 2 +- net/rds/bind.c | 2 +- net/sched/sch_sfb.c | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c index 5ec78555dd2e..42af4b3aa02b 100644 --- a/arch/s390/kernel/process.c +++ b/arch/s390/kernel/process.c @@ -230,7 +230,7 @@ unsigned long arch_align_stack(unsigned long sp) static inline unsigned long brk_rnd(void) { - return (get_random_int() & BRK_RND_MASK) << PAGE_SHIFT; + return (get_random_u16() & BRK_RND_MASK) << PAGE_SHIFT; } unsigned long arch_randomize_brk(struct mm_struct *mm) diff --git a/drivers/mtd/nand/raw/nandsim.c b/drivers/mtd/nand/raw/nandsim.c index 50bcf745e816..d211939c8bdd 100644 --- a/drivers/mtd/nand/raw/nandsim.c +++ b/drivers/mtd/nand/raw/nandsim.c @@ -1402,7 +1402,7 @@ static int ns_do_read_error(struct nandsim *ns, int num) static void ns_do_bit_flips(struct nandsim *ns, int num) { - if (bitflips && prandom_u32() < (1 << 22)) { + if (bitflips && get_random_u16() < (1 << 6)) { int flips = 1; if (bitflips > 1) flips = prandom_u32_max(bitflips) + 1; diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c index a26bbbf20e62..cf7780572f5b 100644 --- a/lib/test_vmalloc.c +++ b/lib/test_vmalloc.c @@ -80,7 +80,7 @@ static int random_size_align_alloc_test(void) int i; for (i = 0; i < test_loop_count; i++) { - rnd = prandom_u32(); + rnd = get_random_u8(); /* * Maximum 1024 pages, if PAGE_SIZE is 4096. 
diff --git a/net/rds/bind.c b/net/rds/bind.c
index 5b5fb4ca8d3e..97a29172a8ee 100644
--- a/net/rds/bind.c
+++ b/net/rds/bind.c
@@ -104,7 +104,7 @@ static int rds_add_bound(struct rds_sock *rs, const struct in6_addr *addr,
 			return -EINVAL;
 		last = rover;
 	} else {
-		rover = max_t(u16, prandom_u32(), 2);
+		rover = max_t(u16, get_random_u16(), 2);
 		last = rover - 1;
 	}
 
diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c
index e2389fa3cff8..0366a1a029a9 100644
--- a/net/sched/sch_sfb.c
+++ b/net/sched/sch_sfb.c
@@ -379,7 +379,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		goto enqueue;
 	}
 
-	r = prandom_u32() & SFB_MAX_PROB;
+	r = get_random_u16() & SFB_MAX_PROB;
 	if (unlikely(r < p_min)) {
 		if (unlikely(p_min > SFB_MAX_PROB / 2)) {
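
A note on the nandsim hunk above: the threshold changes together with the
generator, from (1 << 22) out of 2^32 possible values to (1 << 6) out of
2^16, so the simulated bit-flip probability stays at 2^-10 rather than
changing behavior. The following standalone userspace sketch (plain C, no
kernel APIs, not part of the patch) checks that arithmetic:

/*
 * Illustrative sketch only: verifies that narrowing the random value from
 * 32 to 16 bits while shrinking the nandsim threshold from (1 << 22) to
 * (1 << 6) preserves the acceptance probability.
 */
#include <stdio.h>

int main(void)
{
	/* Old check: prandom_u32() < (1 << 22), i.e. 2^22 of 2^32 values. */
	double old_prob = (double)(1u << 22) / 4294967296.0;

	/* New check: get_random_u16() < (1 << 6), i.e. 2^6 of 2^16 values. */
	double new_prob = (double)(1u << 6) / 65536.0;

	printf("old: %.10f\n", old_prob);
	printf("new: %.10f\n", new_prob);
	return 0;
}

Both values print as 0.0009765625 (2^-10), so the bit-flip rate is unchanged.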