From patchwork Thu May 7 07:50:58 2020
From: Zhen Lei <thunder.leizhen@huawei.com>
To: Minchan Kim, Nitin Gupta, Sergey Senozhatsky, Matthew Wilcox,
	Jens Axboe, Coly Li, Kent Overstreet, Alasdair Kergon,
	Mike Snitzer, linux-block, Andrew Morton, linux-mm, dm-devel,
	Song Liu, linux-raid, linux-kernel
CC: Zhen Lei
Subject: [PATCH v2 08/10] md: use sectors_to_npage() and npage_to_sectors() to clean up code
Date: Thu, 7 May 2020 15:50:58 +0800
Message-ID: <20200507075100.1779-9-thunder.leizhen@huawei.com>
In-Reply-To: <20200507075100.1779-1-thunder.leizhen@huawei.com>
References: <20200507075100.1779-1-thunder.leizhen@huawei.com>

Replace ">> (PAGE_SHIFT - 9)" with sectors_to_npage() 2. Replace "<< (PAGE_SHIFT - 9)" with npage_to_sectors() Suggested-by: Matthew Wilcox Signed-off-by: Zhen Lei --- drivers/md/dm-table.c | 2 +- drivers/md/raid1.c | 2 +- drivers/md/raid10.c | 2 +- drivers/md/raid5-cache.c | 11 +++++------ 4 files changed, 8 insertions(+), 9 deletions(-) diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c index 0a2cc197f62b..e1f176bda528 100644 --- a/drivers/md/dm-table.c +++ b/drivers/md/dm-table.c @@ -1964,7 +1964,7 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, #endif /* Allow reads to exceed readahead limits */ - q->backing_dev_info->io_pages = limits->max_sectors >> (PAGE_SHIFT - 9); + q->backing_dev_info->io_pages = sectors_to_npage(limits->max_sectors); } unsigned int dm_table_get_num_targets(struct dm_table *t) diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index cd810e195086..44ffe1b6d77a 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -2129,7 +2129,7 @@ static void process_checks(struct r1bio *r1_bio) int vcnt; /* Fix variable parts of all bios */ - vcnt = (r1_bio->sectors + PAGE_SIZE / 512 - 1) >> (PAGE_SHIFT - 9); + vcnt = sectors_to_npage(r1_bio->sectors + PAGE_SIZE / 512 - 1); for (i = 0; i < conf->raid_disks * 2; i++) { blk_status_t status; struct bio *b = r1_bio->bios[i]; diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c index ec136e44aef7..948afe720fca 100644 --- a/drivers/md/raid10.c +++ b/drivers/md/raid10.c @@ -2029,7 +2029,7 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio) fbio->bi_iter.bi_idx = 0; fpages = get_resync_pages(fbio)->pages; - vcnt = (r10_bio->sectors + (PAGE_SIZE >> 9) - 1) >> (PAGE_SHIFT - 9); + vcnt = sectors_to_npage(r10_bio->sectors + (PAGE_SIZE >> 9) - 1); /* now find blocks with errors */ for (i=0 ; i < conf->copies ; i++) { int j, d; diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c index 9b6da759dca2..0b9cd810466a 100644 --- a/drivers/md/raid5-cache.c +++ b/drivers/md/raid5-cache.c @@ -832,8 +832,7 @@ static void r5l_append_payload_meta(struct r5l_log *log, u16 type, payload = page_address(io->meta_page) + io->meta_offset; payload->header.type = cpu_to_le16(type); payload->header.flags = cpu_to_le16(0); - payload->size = cpu_to_le32((1 + !!checksum2_valid) << - (PAGE_SHIFT - 9)); + payload->size = cpu_to_le32(npage_to_sectors(1 + !!checksum2_valid)); payload->location = cpu_to_le64(location); payload->checksum[0] = cpu_to_le32(checksum1); if (checksum2_valid) @@ -1042,7 +1041,7 @@ int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh) mutex_lock(&log->io_mutex); /* meta + data */ - reserve = (1 + write_disks) << (PAGE_SHIFT - 9); + reserve = npage_to_sectors(1 + write_disks); if (log->r5c_journal_mode == R5C_JOURNAL_MODE_WRITE_THROUGH) { if (!r5l_has_free_space(log, reserve)) { @@ -2053,7 +2052,7 @@ r5l_recovery_verify_data_checksum_for_mb(struct r5l_log *log, le32_to_cpu(payload->size)); mb_offset += sizeof(struct r5l_payload_data_parity) + sizeof(__le32) * - (le32_to_cpu(payload->size) >> (PAGE_SHIFT - 9)); + sectors_to_npage(le32_to_cpu(payload->size)); } } @@ -2199,7 +2198,7 @@ r5c_recovery_analyze_meta_block(struct r5l_log *log, mb_offset += sizeof(struct r5l_payload_data_parity) + sizeof(__le32) * - (le32_to_cpu(payload->size) >> (PAGE_SHIFT - 9)); + sectors_to_npage(le32_to_cpu(payload->size)); } return 0; @@ -2916,7 +2915,7 @@ int r5c_cache_data(struct r5l_log *log, struct stripe_head *sh) mutex_lock(&log->io_mutex); /* meta + data */ - 
-	reserve = (1 + pages) << (PAGE_SHIFT - 9);
+	reserve = npage_to_sectors(1 + pages);
 	if (test_bit(R5C_LOG_CRITICAL, &conf->cache_state) &&
 	    sh->log_start == MaxSector)
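
For reference: sectors_to_npage() and npage_to_sectors() are introduced
by an earlier patch in this series and are not defined in this patch.
Read in isolation, the call sites above imply definitions along the
lines of the following sketch (the PAGE_SECTORS_SHIFT spelling is an
assumption for illustration, not necessarily the series' exact code):

	#include <linux/blkdev.h>	/* SECTOR_SHIFT == 9 */

	/* log2 of the number of 512-byte sectors per page */
	#define PAGE_SECTORS_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)

	/* sectors -> pages; note this rounds down */
	#define sectors_to_npage(x)	((x) >> PAGE_SECTORS_SHIFT)

	/* pages -> sectors; exact, no rounding involved */
	#define npage_to_sectors(x)	((x) << PAGE_SECTORS_SHIFT)

Because sectors_to_npage() rounds down, the raid1/raid10 hunks keep the
"+ PAGE_SIZE / 512 - 1" and "+ (PAGE_SIZE >> 9) - 1" terms at the call
sites, so the sector-to-page conversion still rounds up to a full page.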