From patchwork Fri Apr 14 13:02:55 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13211477
From: Ryan Roberts
To: Andrew Morton, "Matthew Wilcox (Oracle)", Yu Zhao, "Yin, Fengwei"
Cc: Ryan Roberts, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [RFC v2 PATCH 09/17] mm: Update wp_page_reuse() to operate on range of pages
Date: Fri, 14 Apr 2023 14:02:55 +0100
Message-Id: <20230414130303.2345383-10-ryan.roberts@arm.com>
In-Reply-To: <20230414130303.2345383-1-ryan.roberts@arm.com>
References: <20230414130303.2345383-1-ryan.roberts@arm.com>

We will shortly be updating do_wp_page() to be able to reuse a range of
pages from a large anon folio. As an enabling step, modify
wp_page_reuse() to operate on a range of pages when a struct
anon_folio_range is passed in. Operating on a batch allows us to
amortize the cache maintenance and event counting across the range, for
a small performance improvement.

Currently all call sites pass range=NULL, so no functional changes are
intended.
Signed-off-by: Ryan Roberts
---
 mm/memory.c | 80 +++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 60 insertions(+), 20 deletions(-)

-- 
2.25.1

diff --git a/mm/memory.c b/mm/memory.c
index f92a28064596..83835ff5a818 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3030,6 +3030,14 @@ static inline int max_anon_folio_order(struct vm_area_struct *vma)
 	return ANON_FOLIO_ORDER_MAX;
 }
 
+struct anon_folio_range {
+	unsigned long va_start;
+	pte_t *pte_start;
+	struct page *pg_start;
+	int nr;
+	bool exclusive;
+};
+
 /*
  * Returns index of first pte that is not none, or nr if all are none.
  */
@@ -3122,31 +3130,63 @@ static int calc_anon_folio_order_alloc(struct vm_fault *vmf, int order)
  * case, all we need to do here is to mark the page as writable and update
  * any related book-keeping.
  */
-static inline void wp_page_reuse(struct vm_fault *vmf)
+static inline void wp_page_reuse(struct vm_fault *vmf,
+				struct anon_folio_range *range)
 	__releases(vmf->ptl)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct page *page = vmf->page;
+	unsigned long addr;
+	pte_t *pte;
+	struct page *page;
+	int nr;
 	pte_t entry;
+	int change = 0;
+	int i;
 
 	VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
-	VM_BUG_ON(page && PageAnon(page) && !PageAnonExclusive(page));
 
-	/*
-	 * Clear the pages cpupid information as the existing
-	 * information potentially belongs to a now completely
-	 * unrelated process.
-	 */
-	if (page)
-		page_cpupid_xchg_last(page, (1 << LAST_CPUPID_SHIFT) - 1);
+	if (range) {
+		addr = range->va_start;
+		pte = range->pte_start;
+		page = range->pg_start;
+		nr = range->nr;
+	} else {
+		addr = vmf->address;
+		pte = vmf->pte;
+		page = vmf->page;
+		nr = 1;
+	}
+
+	if (page) {
+		for (i = 0; i < nr; i++, page++) {
+			VM_BUG_ON(PageAnon(page) && !PageAnonExclusive(page));
+
+			/*
+			 * Clear the pages cpupid information as the existing
+			 * information potentially belongs to a now completely
+			 * unrelated process.
+			 */
+			page_cpupid_xchg_last(page,
+					(1 << LAST_CPUPID_SHIFT) - 1);
+		}
+	}
+
+	flush_cache_range(vma, addr, addr + (nr << PAGE_SHIFT));
+
+	for (i = 0; i < nr; i++) {
+		entry = pte_mkyoung(pte[i]);
+		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		change |= ptep_set_access_flags(vma,
+					addr + (i << PAGE_SHIFT),
+					pte + i,
+					entry, 1);
+	}
+
+	if (change)
+		update_mmu_cache_range(vma, addr, pte, nr);
 
-	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
-	entry = pte_mkyoung(vmf->orig_pte);
-	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
-		update_mmu_cache(vma, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
-	count_vm_event(PGREUSE);
+	count_vm_events(PGREUSE, nr);
 }
 
 /*
@@ -3359,7 +3399,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		return VM_FAULT_NOPAGE;
 	}
-	wp_page_reuse(vmf);
+	wp_page_reuse(vmf, NULL);
 	return 0;
 }
 
@@ -3381,7 +3421,7 @@ static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
 			return ret;
 		return finish_mkwrite_fault(vmf);
 	}
-	wp_page_reuse(vmf);
+	wp_page_reuse(vmf, NULL);
 	return 0;
 }
 
@@ -3410,7 +3450,7 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf)
 			return tmp;
 		}
 	} else {
-		wp_page_reuse(vmf);
+		wp_page_reuse(vmf, NULL);
 		lock_page(vmf->page);
 	}
 	ret |= fault_dirty_shared_page(vmf);
@@ -3534,7 +3574,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		return 0;
 	}
-	wp_page_reuse(vmf);
+	wp_page_reuse(vmf, NULL);
 	return 0;
 }
 copy:
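The core pattern in the hunk above is to OR together the per-entry return values of ptep_set_access_flags() and issue a single range-wide MMU cache update, instead of one per page. A minimal userspace sketch of that pattern is below; all names here (set_access_flags, reuse_range, cache_updates, the PTE_* bits) are illustrative stand-ins, not kernel APIs.

```c
#include <stdbool.h>

/*
 * Userspace model of the batched wp_page_reuse() loop. Each "pte" is a
 * plain bitmask; set_access_flags() returns true only when the entry
 * actually changed, and the (expensive) cache update is issued at most
 * once per range, mirroring the single update_mmu_cache_range() call.
 */
#define PTE_YOUNG 0x1u
#define PTE_DIRTY 0x2u
#define PTE_WRITE 0x4u

static int cache_updates; /* counts simulated range-wide cache updates */

static bool set_access_flags(unsigned *pte, unsigned newval)
{
	if (*pte == newval)
		return false; /* no change: nothing to propagate */
	*pte = newval;
	return true;
}

/* Mark every entry in [0, nr) young/dirty/writable; batch the update. */
static void reuse_range(unsigned *ptes, int nr)
{
	bool change = false;
	int i;

	for (i = 0; i < nr; i++)
		change |= set_access_flags(&ptes[i],
				ptes[i] | PTE_YOUNG | PTE_DIRTY | PTE_WRITE);

	if (change)
		cache_updates++; /* once per range, not once per entry */
}
```

Calling reuse_range() a second time on an already-updated range performs no cache update at all, since no entry changes; amortizing that per-page cost over the range is the small win the commit message refers to.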