From patchwork Fri Apr 30 19:52:24 2021
X-Patchwork-Submitter: Michel Lespinasse
X-Patchwork-Id: 12234201
From: Michel Lespinasse
To: Linux-MM, Linux-Kernel
Cc: Laurent Dufour, Peter Zijlstra, Michal Hocko, Matthew Wilcox,
    Rik van Riel, Paul McKenney, Andrew Morton, Suren Baghdasaryan,
    Joel Fernandes, Andy Lutomirski, Michel Lespinasse
Subject: [PATCH 23/29] mm: implement speculative handling in do_swap_page()
Date: Fri, 30 Apr 2021 12:52:24 -0700
Message-Id: <20210430195232.30491-24-michel@lespinasse.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210430195232.30491-1-michel@lespinasse.org>
References: <20210430195232.30491-1-michel@lespinasse.org>

If the pte is larger than a long, use pte_spinlock() to lock the page
table when verifying the pte; pte_spinlock() is necessary to ensure
the page table is still valid when we are locking it.

Abort speculative faults if the pte is not a swap entry, or if the
desired page is not found in the swap cache, to keep things as simple
as possible.

Only use trylock when locking the swapped page; again, this keeps
things simple, and the usual lock_page_or_retry() would otherwise try
to release the mmap lock, which is not held in the speculative case.

Use pte_map_lock() to ensure proper synchronization when finally
committing the faulted page to the mm address space.

Signed-off-by: Michel Lespinasse
---
A condensed sketch of the speculative bail-out points added here
follows the diff, for readers skimming the series.

 mm/memory.c | 74 ++++++++++++++++++++++++++++++-----------------------
 1 file changed, 42 insertions(+), 32 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c3cd29d3acc6..a3708b4a616c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2654,30 +2654,6 @@ bool __pte_map_lock(struct vm_fault *vmf)
 
 #endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
 
-/*
- * handle_pte_fault chooses page fault handler according to an entry which was
- * read non-atomically. Before making any commitment, on those architectures
- * or configurations (e.g. i386 with PAE) which might give a mix of unmatched
- * parts, do_swap_page must check under lock before unmapping the pte and
- * proceeding (but do_wp_page is only called after already making such a check;
- * and do_anonymous_page can safely check later on).
- */
-static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
-				pte_t *page_table, pte_t orig_pte)
-{
-	int same = 1;
-#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
-	if (sizeof(pte_t) > sizeof(unsigned long)) {
-		spinlock_t *ptl = pte_lockptr(mm, pmd);
-		spin_lock(ptl);
-		same = pte_same(*page_table, orig_pte);
-		spin_unlock(ptl);
-	}
-#endif
-	pte_unmap(page_table);
-	return same;
-}
-
 static inline bool cow_user_page(struct page *dst, struct page *src,
 				 struct vm_fault *vmf)
 {
@@ -3386,12 +3362,34 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		return VM_FAULT_RETRY;
 	}
 
-	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
-		goto out;
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
+	if (sizeof(pte_t) > sizeof(unsigned long)) {
+		/*
+		 * vmf->orig_pte was read non-atomically. Before making
+		 * any commitment, on those architectures or configurations
+		 * (e.g. i386 with PAE) which might give a mix of
+		 * unmatched parts, we must check under lock before
+		 * unmapping the pte and proceeding.
+		 *
+		 * (but do_wp_page is only called after already making
+		 * such a check; and do_anonymous_page can safely
+		 * check later on).
+		 */
+		if (!pte_spinlock(vmf))
+			return VM_FAULT_RETRY;
+		if (!pte_same(*vmf->pte, vmf->orig_pte))
+			goto unlock;
+		spin_unlock(vmf->ptl);
+	}
+#endif
+	pte_unmap(vmf->pte);
+	vmf->pte = NULL;
 
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
-		if (is_migration_entry(entry)) {
+		if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
+			ret = VM_FAULT_RETRY;
+		} else if (is_migration_entry(entry)) {
 			migration_entry_wait(vma->vm_mm, vmf->pmd,
 					     vmf->address);
 		} else if (is_device_private_entry(entry)) {
@@ -3412,8 +3410,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	swapcache = page;
 
 	if (!page) {
-		struct swap_info_struct *si = swp_swap_info(entry);
+		struct swap_info_struct *si;
 
+		if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
+			delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+			return VM_FAULT_RETRY;
+		}
+
+		si = swp_swap_info(entry);
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
 			/* skip swapcache */
@@ -3476,7 +3480,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			goto out_release;
 	}
 
-	locked = lock_page_or_retry(page, vma->vm_mm, vmf->flags);
+	if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+		locked = trylock_page(page);
+	else
+		locked = lock_page_or_retry(page, vma->vm_mm, vmf->flags);
 
 	delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
 	if (!locked) {
@@ -3504,10 +3511,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	cgroup_throttle_swaprate(page, GFP_KERNEL);
 
 	/*
-	 * Back out if somebody else already faulted in this pte.
+	 * Back out if the VMA has changed in our back during a speculative
+	 * page fault or if somebody else already faulted in this pte.
 	 */
-	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
-			&vmf->ptl);
+	if (!pte_map_lock(vmf)) {
+		ret = VM_FAULT_RETRY;
+		goto out_page;
+	}
 	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
 		goto out_nomap;
 
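As promised above, here is a condensed, illustrative sketch of the
speculative bail-out points this patch adds to do_swap_page(). It is
not part of the patch: the helper name is made up, error paths are
simplified (swapin delay accounting is omitted, for instance), and
FAULT_FLAG_SPECULATIVE is assumed to be set on entry. pte_spinlock()
and pte_map_lock() come from earlier patches in this series; the diff
above remains authoritative.

/*
 * Illustrative sketch only, not part of the patch: the speculative
 * bail-out points that this patch adds to do_swap_page(), condensed
 * into one hypothetical helper.  Assumes FAULT_FLAG_SPECULATIVE is
 * set in vmf->flags.
 */
static vm_fault_t speculative_swap_fault_sketch(struct vm_fault *vmf)
{
	swp_entry_t entry;
	struct page *page;

#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
	/*
	 * (1) A pte wider than a long is read non-atomically, so
	 * vmf->orig_pte must be re-checked under the page table lock.
	 */
	if (sizeof(pte_t) > sizeof(unsigned long)) {
		if (!pte_spinlock(vmf))
			return VM_FAULT_RETRY;	/* page table went away */
		if (!pte_same(*vmf->pte, vmf->orig_pte)) {
			/* Raced with another fault; nothing left to do. */
			pte_unmap_unlock(vmf->pte, vmf->ptl);
			return 0;
		}
		spin_unlock(vmf->ptl);
	}
#endif
	pte_unmap(vmf->pte);
	vmf->pte = NULL;

	/* (2) Migration and device-private entries need the mmap lock. */
	entry = pte_to_swp_entry(vmf->orig_pte);
	if (non_swap_entry(entry))
		return VM_FAULT_RETRY;

	/*
	 * (3) Swapin itself (page absent from the swap cache) is left
	 * to the classical, mmap-lock-protected path.
	 */
	page = lookup_swap_cache(entry, vmf->vma, vmf->address);
	if (!page)
		return VM_FAULT_RETRY;

	/*
	 * (4) lock_page_or_retry() may release the mmap lock, which is
	 * not held on the speculative path, so only a trylock is safe.
	 */
	if (!trylock_page(page)) {
		put_page(page);
		return VM_FAULT_RETRY;
	}

	/*
	 * (5) Re-take and re-validate the pte before committing the
	 * page to the address space.
	 */
	if (!pte_map_lock(vmf)) {
		unlock_page(page);
		put_page(page);
		return VM_FAULT_RETRY;
	}

	/* ... from here the fault proceeds as in the regular path. */
	return 0;
}

The actual patch keeps this logic inline in do_swap_page(), so the
speculative and classical paths share the subsequent page-commit code.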