From patchwork Thu Mar 31 06:38:20 2022
X-Patchwork-Submitter: Charan Teja Kalla
X-Patchwork-Id: 12796774
From: Charan Teja Kalla
Subject: [PATCH RESEND V5,1/2] mm: fadvise: move 'endbyte' calculations to helper function
Date: Thu, 31 Mar 2022 12:08:20 +0530

From: Charan Teja Reddy

Move the 'endbyte' calculations that determine the last byte that fadvise
can operate on to a helper function. This is a preparatory change for the
shmem_fadvise() functionality in the next patch. No functional changes in
this patch.
Signed-off-by: Charan Teja Reddy
---
Changes in V5:
 -- Moved the 'endbyte' calculation to a helper function.
 -- This patch is newly raised in V5, thus no change exists from V1 to V4.

 mm/fadvise.c  | 11 +----------
 mm/internal.h | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/mm/fadvise.c b/mm/fadvise.c
index 338f160..0c82be2 100644
--- a/mm/fadvise.c
+++ b/mm/fadvise.c
@@ -65,16 +65,7 @@ int generic_fadvise(struct file *file, loff_t offset, loff_t len, int advice)
 		return 0;
 	}
 
-	/*
-	 * Careful about overflows. Len == 0 means "as much as possible". Use
-	 * unsigned math because signed overflows are undefined and UBSan
-	 * complains.
-	 */
-	endbyte = (u64)offset + (u64)len;
-	if (!len || endbyte < len)
-		endbyte = -1;
-	else
-		endbyte--;	/* inclusive */
+	endbyte = fadvise_calc_endbyte(offset, len);
 
 	switch (advice) {
 	case POSIX_FADV_NORMAL:
diff --git a/mm/internal.h b/mm/internal.h
index 58dc6ad..b02f07e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -546,6 +546,27 @@ static inline void vunmap_range_noflush(unsigned long start, unsigned long end)
 #endif /* !CONFIG_MMU */
 
 /*
+ * Helper function to get the endbyte of a file that fadvise can operate on.
+ */
+static inline loff_t fadvise_calc_endbyte(loff_t offset, loff_t len)
+{
+	loff_t endbyte;
+
+	/*
+	 * Careful about overflows. Len == 0 means "as much as possible". Use
+	 * unsigned math because signed overflows are undefined and UBSan
+	 * complains.
+	 */
+	endbyte = (u64)offset + (u64)len;
+	if (!len || endbyte < len)
+		endbyte = -1;
+	else
+		endbyte--;	/* inclusive */
+
+	return endbyte;
+}
+
+/*
  * Return the mem_map entry representing the 'offset' subpage within
  * the maximally aligned gigantic page 'base'. Handle any discontiguity
  * in the mem_map at MAX_ORDER_NR_PAGES boundaries.