From patchwork Wed Nov 10 08:40:46 2021
X-Patchwork-Submitter: Qi Zheng
X-Patchwork-Id: 12611597
From: Qi Zheng
To: akpm@linux-foundation.org, tglx@linutronix.de,
    kirill.shutemov@linux.intel.com, mika.penttila@nextfour.com,
    david@redhat.com, jgg@nvidia.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, songmuchun@bytedance.com,
    zhouchengming@bytedance.com, Qi Zheng
Subject: [PATCH v3 04/15] mm: rework the parameter of lock_page_or_retry()
Date: Wed, 10 Nov 2021 16:40:46 +0800
Message-Id: <20211110084057.27676-5-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20211110084057.27676-1-zhengqi.arch@bytedance.com>
References: <20211110084057.27676-1-zhengqi.arch@bytedance.com>

We need the vmf in lock_page_or_retry() in the subsequent patch, so pass
it in directly.

Signed-off-by: Qi Zheng
---
A small userspace sketch of the reworked calling convention follows the diff.

 include/linux/pagemap.h | 8 +++-----
 mm/filemap.c            | 6 ++++--
 mm/memory.c             | 4 ++--
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 6a30916b76e5..94f9547b4411 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -709,8 +709,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 
 void __folio_lock(struct folio *folio);
 int __folio_lock_killable(struct folio *folio);
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-			   unsigned int flags);
+bool __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf);
 void unlock_page(struct page *page);
 void folio_unlock(struct folio *folio);
 
@@ -772,14 +771,13 @@ static inline int lock_page_killable(struct page *page)
  * Return value and mmap_lock implications depend on flags; see
  * __folio_lock_or_retry().
  */
-static inline bool lock_page_or_retry(struct page *page, struct mm_struct *mm,
-				      unsigned int flags)
+static inline bool lock_page_or_retry(struct page *page, struct vm_fault *vmf)
 {
 	struct folio *folio;
 	might_sleep();
 
 	folio = page_folio(page);
-	return folio_trylock(folio) || __folio_lock_or_retry(folio, mm, flags);
+	return folio_trylock(folio) || __folio_lock_or_retry(folio, vmf);
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 07c654202870..ff8d19b7ce1d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1695,9 +1695,11 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
  * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
  * with the folio locked and the mmap_lock unperturbed.
  */
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-			   unsigned int flags)
+bool __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
+	unsigned int flags = vmf->flags;
+	struct mm_struct *mm = vmf->vma->vm_mm;
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
diff --git a/mm/memory.c b/mm/memory.c
index b00cd60fc368..bec6a5d5ee7c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3443,7 +3443,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
 
-	if (!lock_page_or_retry(page, vma->vm_mm, vmf->flags))
+	if (!lock_page_or_retry(page, vmf))
 		return VM_FAULT_RETRY;
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
 				vma->vm_mm, vmf->address & PAGE_MASK,
@@ -3576,7 +3576,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			goto out_release;
 		}
 
-	locked = lock_page_or_retry(page, vma->vm_mm, vmf->flags);
+	locked = lock_page_or_retry(page, vmf);
 
 	delayacct_clear_flag(current, DELAYACCT_PF_SWAPIN);
 	if (!locked) {
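
The following is a minimal, userspace-only sketch of the calling-convention
change, not kernel code and not part of the patch: the struct definitions are
simplified stand-ins for the kernel's vm_fault/vm_area_struct/mm_struct, and
lock_page_or_retry_old/lock_page_or_retry_new are illustrative names; only the
parameter shapes mirror the patch.

/*
 * Illustration only -- simplified stand-in types, not the kernel's.
 * Shows how call sites shrink once the helper takes the vm_fault.
 */
#include <stdbool.h>
#include <stdio.h>

struct mm_struct { int id; };
struct vm_area_struct { struct mm_struct *vm_mm; };
struct page { bool locked; };

struct vm_fault {
	struct vm_area_struct *vma;
	unsigned int flags;
	struct page *page;
};

/* Old shape: every caller had to dig mm and flags out of the vmf. */
static bool lock_page_or_retry_old(struct page *page, struct mm_struct *mm,
				   unsigned int flags)
{
	(void)mm;
	(void)flags;
	page->locked = true;	/* pretend the trylock always succeeds */
	return true;
}

/* New shape: the helper derives mm and flags from the vmf itself. */
static bool lock_page_or_retry_new(struct page *page, struct vm_fault *vmf)
{
	unsigned int flags = vmf->flags;
	struct mm_struct *mm = vmf->vma->vm_mm;

	return lock_page_or_retry_old(page, mm, flags);
}

int main(void)
{
	struct mm_struct mm = { .id = 1 };
	struct vm_area_struct vma = { .vm_mm = &mm };
	struct page page = { .locked = false };
	struct vm_fault vmf = { .vma = &vma, .flags = 0, .page = &page };

	/* The call site goes from three arguments to two. */
	if (!lock_page_or_retry_new(vmf.page, &vmf))
		return 1;

	printf("page locked: %d\n", page.locked);
	return 0;
}

Passing the whole vm_fault keeps the helper's signature stable when, as the
commit message notes, a later patch in the series needs more of the fault
state inside __folio_lock_or_retry().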