From patchwork Mon Jul 19 12:32:05 2021
X-Patchwork-Submitter: James Simmons
X-Patchwork-Id: 12385765
From: James Simmons
To: Andreas Dilger, Oleg Drokin, NeilBrown
Cc: Wang Shilong, Lustre Development List
Date: Mon, 19 Jul 2021 08:32:05 -0400
Message-Id: <1626697933-6971-11-git-send-email-jsimmons@infradead.org>
In-Reply-To: <1626697933-6971-1-git-send-email-jsimmons@infradead.org>
References: <1626697933-6971-1-git-send-email-jsimmons@infradead.org>
Subject: [lustre-devel] [PATCH 10/18] lustre: readahead: fix to reserve min pages

From: Wang Shilong

@pages_min might be larger than @pages, which indicates that more pages
should be read; passing such values through unchanged triggers a warning
later in ll_ra_count_get().
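A minimal userspace sketch of that invariant, not part of the patch itself;
reserve(), lru_budget and the constants below are hypothetical stand-ins for
ll_ra_count_get() and the ccc_lru_max >> 2 budget it applies:

	#include <stdio.h>

	/* Stand-in for ll_ra_count_get(): shrinks @pages against an LRU
	 * budget, then silently clamps @pages_min, as in the patch.
	 */
	static unsigned long reserve(unsigned long pages,
				     unsigned long pages_min)
	{
		unsigned long lru_budget = 64;	/* assumed LRU-derived cap */

		if (pages > lru_budget)
			pages = lru_budget;
		if (pages_min > pages)		/* the unlikely() path in the patch */
			pages_min = pages;
		return pages;
	}

	int main(void)
	{
		unsigned long pages = 8, pages_min = 32;

		/* The fix: bump @pages up to @pages_min before reserving,
		 * so WARN_ON_ONCE(pages_min > pages) never fires.
		 */
		if (pages_min > pages)
			pages = pages_min;

		printf("reserved %lu pages\n", reserve(pages, pages_min));
		return 0;
	}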
WC-bug-id: https://jira.whamcloud.com/browse/LU-14778
Lustre-commit: 4fc127428f00d6a3 ("LU-14778 readahead: fix to reserve min pages")
Signed-off-by: Wang Shilong
Reviewed-on: https://review.whamcloud.com/44050
Reviewed-by: Andreas Dilger
Reviewed-by: Bobi Jam
Reviewed-by: Oleg Drokin
Signed-off-by: James Simmons
---
 fs/lustre/llite/rw.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/fs/lustre/llite/rw.c b/fs/lustre/llite/rw.c
index 184e5e8..4de77f6 100644
--- a/fs/lustre/llite/rw.c
+++ b/fs/lustre/llite/rw.c
@@ -85,8 +85,9 @@ static unsigned long ll_ra_count_get(struct ll_sb_info *sbi,
 	struct ll_ra_info *ra = &sbi->ll_ra_info;
 	long ret;
 
+	WARN_ON_ONCE(pages_min > pages);
 	/**
-	 * Don't try readahead agreesively if we are limited
+	 * Don't try readahead aggressively if we are limited
 	 * LRU pages, otherwise, it could cause deadlock.
 	 */
 	pages = min(sbi->ll_cache->ccc_lru_max >> 2, pages);
@@ -95,7 +96,7 @@ static unsigned long ll_ra_count_get(struct ll_sb_info *sbi,
 	 * this will make us leak @ra_cur_pages, because
 	 * ll_ra_count_put() actually freed @pages.
 	 */
-	if (WARN_ON_ONCE(pages_min > pages))
+	if (unlikely(pages_min > pages))
 		pages_min = pages;
 
 	/*
@@ -829,7 +830,8 @@ static int ll_readahead(const struct lu_env *env, struct cl_io *io,
 	/* don't over-reserve for mmap range read */
 	if (skip_index)
 		pages_min = 0;
-
+	if (pages_min > pages)
+		pages = pages_min;
 	ria->ria_reserved = ll_ra_count_get(ll_i2sbi(inode), ria, pages,
 					    pages_min);
 	if (ria->ria_reserved < pages)
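The ordering here appears deliberate: after the pages = min(ccc_lru_max >> 2,
pages) clamp, @pages can legitimately fall below @pages_min, so that check
becomes a quiet unlikely() adjustment, while the new WARN_ON_ONCE() at
function entry only flags callers that passed inconsistent values to begin
with.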