From patchwork Thu Jul 15 03:35:04 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 12378471
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, Christoph Hellwig, Jeff Layton,
 "Kirill A. Shutemov", Vlastimil Babka, William Kucharski, David Howells
Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 018/138] mm/filemap: Add folio_lock() Date: Thu, 15 Jul 2021 04:35:04 +0100 Message-Id: <20210715033704.692967-19-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=lt4vCVlw; spf=none (imf28.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 9DA3E90000A5 X-Stat-Signature: any71fz6by6tgospudpgwfopzn9gqek4 X-HE-Tag: 1626321203-902349 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is like lock_page() but for use by callers who know they have a folio. Convert __lock_page() to be __folio_lock(). This saves one call to compound_head() per contended call to lock_page(). Saves 455 bytes of text; mostly from improved register allocation and inlining decisions. __folio_lock is 59 bytes while __lock_page was 79. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells Acked-by: Mike Rapoport --- include/linux/pagemap.h | 24 +++++++++++++++++++----- mm/filemap.c | 29 +++++++++++++++-------------- 2 files changed, 34 insertions(+), 19 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index a13edc7a2916..c3673c55125b 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -653,7 +653,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page, return true; } -extern void __lock_page(struct page *page); +void __folio_lock(struct folio *folio); extern int __lock_page_killable(struct page *page); extern int __lock_page_async(struct page *page, struct wait_page_queue *wait); extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, @@ -661,13 +661,24 @@ extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm, void unlock_page(struct page *page); void folio_unlock(struct folio *folio); +static inline bool folio_trylock(struct folio *folio) +{ + return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0))); +} + /* * Return true if the page was successfully locked */ static inline int trylock_page(struct page *page) { - page = compound_head(page); - return (likely(!test_and_set_bit_lock(PG_locked, &page->flags))); + return folio_trylock(page_folio(page)); +} + +static inline void folio_lock(struct folio *folio) +{ + might_sleep(); + if (!folio_trylock(folio)) + __folio_lock(folio); } /* @@ -675,9 +686,12 @@ static inline int trylock_page(struct page *page) */ static inline void lock_page(struct page *page) { + struct folio *folio; might_sleep(); - if (!trylock_page(page)) - __lock_page(page); + + folio = page_folio(page); + if (!folio_trylock(folio)) + __folio_lock(folio); } /* diff --git a/mm/filemap.c b/mm/filemap.c index 1af67ef94e4c..95f89656f126 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1187,7 +1187,7 @@ static void wake_up_page(struct page *page, int bit) */ enum behavior { EXCLUSIVE, /* Hold ref to page and take the bit when woken, like - * __lock_page() waiting on then setting 
 include/linux/pagemap.h | 24 +++++++++++++++++++-----
 mm/filemap.c            | 29 +++++++++++++++--------------
 2 files changed, 34 insertions(+), 19 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a13edc7a2916..c3673c55125b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -653,7 +653,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 	return true;
 }
 
-extern void __lock_page(struct page *page);
+void __folio_lock(struct folio *folio);
 extern int __lock_page_killable(struct page *page);
 extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
@@ -661,13 +661,24 @@ extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 void unlock_page(struct page *page);
 void folio_unlock(struct folio *folio);
 
+static inline bool folio_trylock(struct folio *folio)
+{
+	return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0)));
+}
+
 /*
  * Return true if the page was successfully locked
  */
 static inline int trylock_page(struct page *page)
 {
-	page = compound_head(page);
-	return (likely(!test_and_set_bit_lock(PG_locked, &page->flags)));
+	return folio_trylock(page_folio(page));
+}
+
+static inline void folio_lock(struct folio *folio)
+{
+	might_sleep();
+	if (!folio_trylock(folio))
+		__folio_lock(folio);
 }
 
 /*
@@ -675,9 +686,12 @@ static inline int trylock_page(struct page *page)
  */
 static inline void lock_page(struct page *page)
 {
+	struct folio *folio;
 	might_sleep();
-	if (!trylock_page(page))
-		__lock_page(page);
+
+	folio = page_folio(page);
+	if (!folio_trylock(folio))
+		__folio_lock(folio);
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 1af67ef94e4c..95f89656f126 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1187,7 +1187,7 @@ static void wake_up_page(struct page *page, int bit)
  */
 enum behavior {
 	EXCLUSIVE,	/* Hold ref to page and take the bit when woken, like
-			 * __lock_page() waiting on then setting PG_locked.
+			 * __folio_lock() waiting on then setting PG_locked.
 			 */
 	SHARED,		/* Hold ref to page and check the bit when woken, like
 			 * wait_on_page_writeback() waiting on PG_writeback.
@@ -1578,17 +1578,16 @@ void page_endio(struct page *page, bool is_write, int err)
 EXPORT_SYMBOL_GPL(page_endio);
 
 /**
- * __lock_page - get a lock on the page, assuming we need to sleep to get it
- * @__page: the page to lock
+ * __folio_lock - Get a lock on the folio, assuming we need to sleep to get it.
+ * @folio: The folio to lock
  */
-void __lock_page(struct page *__page)
+void __folio_lock(struct folio *folio)
 {
-	struct page *page = compound_head(__page);
-	wait_queue_head_t *q = page_waitqueue(page);
-	wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE,
+	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_UNINTERRUPTIBLE,
 				EXCLUSIVE);
 }
-EXPORT_SYMBOL(__lock_page);
+EXPORT_SYMBOL(__folio_lock);
 
 int __lock_page_killable(struct page *__page)
 {
@@ -1663,10 +1662,10 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 			return 0;
 		}
 	} else {
-		__lock_page(page);
+		__folio_lock(page_folio(page));
 	}
-	return 1;
 
+	return 1;
 }
 
 /**
@@ -2837,7 +2836,9 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start,
 static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 				     struct file **fpin)
 {
-	if (trylock_page(page))
+	struct folio *folio = page_folio(page);
+
+	if (folio_trylock(folio))
 		return 1;
 
 	/*
@@ -2850,7 +2851,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 	*fpin = maybe_unlock_mmap_for_io(vmf, *fpin);
 
 	if (vmf->flags & FAULT_FLAG_KILLABLE) {
-		if (__lock_page_killable(page)) {
+		if (__lock_page_killable(&folio->page)) {
 			/*
 			 * We didn't have the right flags to drop the mmap_lock,
 			 * but all fault_handlers only check for fatal signals
@@ -2862,11 +2863,11 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 			return 0;
 		}
 	} else
-		__lock_page(page);
+		__folio_lock(folio);
+
 	return 1;
 }
 
-
 /*
  * Synchronous readahead happens when we don't even find a page in the page
  * cache at all. We don't want to perform IO under the mmap sem, so if we have