From patchwork Sat May 14 12:05:56 2022
X-Patchwork-Submitter: "Jason A. Donenfeld"
X-Patchwork-Id: 12849774
From: "Jason A. Donenfeld"
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, akpm@linux-foundation.org
Cc: "Jason A. Donenfeld"
Subject: [PATCH] random: move randomize_page() into mm where it belongs
Date: Sat, 14 May 2022 14:05:56 +0200
Message-Id: <20220514120556.363559-1-Jason@zx2c4.com>

randomize_page is an mm function. It is documented like one. It contains
the history of one. It has the naming convention of one. It looks just
like another very similar function in mm, randomize_stack_top(). And it
has always been maintained and updated by mm people. There is no need
for it to be in random.c. In the "which shape does not look like the
other ones" test, pointing to randomize_page() is correct.
So move randomize_page() into mm/util.c, right next to the similar
randomize_stack_top() function.

This commit contains no actual code changes.

Cc: Andrew Morton
Signed-off-by: Jason A. Donenfeld
---
 drivers/char/random.c | 32 --------------------------------
 include/linux/mm.h    |  1 +
 mm/util.c             | 32 ++++++++++++++++++++++++++++++++
 3 files changed, 33 insertions(+), 32 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 6d8ccb200c5c..5738cab0079e 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -620,38 +620,6 @@ int __cold random_prepare_cpu(unsigned int cpu)
 }
 #endif
 
-/**
- * randomize_page - Generate a random, page aligned address
- * @start:	The smallest acceptable address the caller will take.
- * @range:	The size of the area, starting at @start, within which the
- *		random address must fall.
- *
- * If @start + @range would overflow, @range is capped.
- *
- * NOTE: Historical use of randomize_range, which this replaces, presumed that
- * @start was already page aligned. We now align it regardless.
- *
- * Return: A page aligned address within [start, start + range). On error,
- * @start is returned.
- */
-unsigned long randomize_page(unsigned long start, unsigned long range)
-{
-	if (!PAGE_ALIGNED(start)) {
-		range -= PAGE_ALIGN(start) - start;
-		start = PAGE_ALIGN(start);
-	}
-
-	if (start > ULONG_MAX - range)
-		range = ULONG_MAX - start;
-
-	range >>= PAGE_SHIFT;
-
-	if (range == 0)
-		return start;
-
-	return start + (get_random_long() % range << PAGE_SHIFT);
-}
-
 /**********************************************************************
  *
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9f44254af8ce..b0183450e484 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2677,6 +2677,7 @@ extern int install_special_mapping(struct mm_struct *mm,
 				   unsigned long flags, struct page **pages);
 
 unsigned long randomize_stack_top(unsigned long stack_top);
+unsigned long randomize_page(unsigned long start, unsigned long range);
 
 extern unsigned long get_unmapped_area(struct file *, unsigned long,
 				       unsigned long, unsigned long, unsigned long);
 
diff --git a/mm/util.c b/mm/util.c
index 3492a9e81aa3..ac63e5ca8b21 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -343,6 +343,38 @@ unsigned long randomize_stack_top(unsigned long stack_top)
 #endif
 }
 
+/**
+ * randomize_page - Generate a random, page aligned address
+ * @start:	The smallest acceptable address the caller will take.
+ * @range:	The size of the area, starting at @start, within which the
+ *		random address must fall.
+ *
+ * If @start + @range would overflow, @range is capped.
+ *
+ * NOTE: Historical use of randomize_range, which this replaces, presumed that
+ * @start was already page aligned. We now align it regardless.
+ *
+ * Return: A page aligned address within [start, start + range). On error,
+ * @start is returned.
+ */
+unsigned long randomize_page(unsigned long start, unsigned long range)
+{
+	if (!PAGE_ALIGNED(start)) {
+		range -= PAGE_ALIGN(start) - start;
+		start = PAGE_ALIGN(start);
+	}
+
+	if (start > ULONG_MAX - range)
+		range = ULONG_MAX - start;
+
+	range >>= PAGE_SHIFT;
+
+	if (range == 0)
+		return start;
+
+	return start + (get_random_long() % range << PAGE_SHIFT);
+}
+
 #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
 unsigned long arch_randomize_brk(struct mm_struct *mm)
 {