From patchwork Fri Mar 17 10:57:59 2023
From: Ryan Roberts
To: Andrew Morton, "Matthew Wilcox (Oracle)", "Yin, Fengwei", Yu Zhao
Cc: Ryan Roberts, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 3/6] mm: Introduce try_vma_alloc_zeroed_movable_folio()
Date: Fri, 17 Mar 2023 10:57:59 +0000
Message-Id: <20230317105802.2634004-4-ryan.roberts@arm.com>
In-Reply-To: <20230317105802.2634004-1-ryan.roberts@arm.com>
References: <20230317105802.2634004-1-ryan.roberts@arm.com>

Like vma_alloc_zeroed_movable_folio(), except it will opportunistically
attempt to allocate high-order folios, retrying with lower orders all the
way to order-0, until success. The caller must check what they got with
folio_order(). This will be used to opportunistically allocate large
folios for anonymous memory with a sensible fallback under pressure.
For attempts to allocate non-0 orders, we set __GFP_NORETRY to prevent high
latency due to reclaim, instead preferring to just try for a lower order.
The same approach is used by the readahead code when allocating large
folios.

Signed-off-by: Ryan Roberts
---
 mm/memory.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

-- 
2.25.1

diff --git a/mm/memory.c b/mm/memory.c
index 8798da968686..c9e09415ee18 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3024,6 +3024,27 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 	count_vm_event(PGREUSE);
 }
 
+/*
+ * Opportunistically attempt to allocate high-order folios, retrying with lower
+ * orders all the way to order-0, until success. The user must check what they
+ * got with folio_order().
+ */
+static struct folio *try_vma_alloc_zeroed_movable_folio(
+					struct vm_area_struct *vma,
+					unsigned long vaddr, int order)
+{
+	struct folio *folio;
+	gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN;
+
+	for (; order > 0; order--) {
+		folio = vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order);
+		if (folio)
+			return folio;
+	}
+
+	return vma_alloc_zeroed_movable_folio(vma, vaddr, 0, 0);
+}
+
 /*
  * Handle the case of a page which we actually need to copy to a new page,
  * either due to COW or unsharing.
@@ -3061,8 +3082,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		goto oom;
 
 	if (is_zero_pfn(pte_pfn(vmf->orig_pte))) {
-		new_folio = vma_alloc_zeroed_movable_folio(vma, vmf->address,
-							   0, 0);
+		new_folio = try_vma_alloc_zeroed_movable_folio(vma,
+							vmf->address, 0);
 		if (!new_folio)
 			goto oom;
 	} else {
@@ -4050,7 +4071,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	/* Allocate our own private page. */
 	if (unlikely(anon_vma_prepare(vma)))
 		goto oom;
-	folio = vma_alloc_zeroed_movable_folio(vma, vmf->address, 0, 0);
+	folio = try_vma_alloc_zeroed_movable_folio(vma, vmf->address, 0);
 	if (!folio)
 		goto oom;
 
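
For context, a minimal caller-side sketch of the intended usage pattern:
because the helper may fall back to any lower order, the caller is expected
to size all follow-up work from folio_order() / folio_nr_pages() on the
folio it actually got, not from the order it asked for. Note that
example_fault_alloc() and the desired_order parameter below are hypothetical
names used only for illustration; the only function this patch introduces is
try_vma_alloc_zeroed_movable_folio().

/*
 * Illustrative sketch only -- not part of this patch. Shows the expected
 * calling pattern for try_vma_alloc_zeroed_movable_folio().
 */
static vm_fault_t example_fault_alloc(struct vm_fault *vmf, int desired_order)
{
	struct vm_area_struct *vma = vmf->vma;
	struct folio *folio;
	int nr_pages;

	/* May return any order from desired_order down to 0, or NULL. */
	folio = try_vma_alloc_zeroed_movable_folio(vma, vmf->address,
						   desired_order);
	if (!folio)
		return VM_FAULT_OOM;

	/* Size follow-up work from the folio we actually got. */
	nr_pages = folio_nr_pages(folio);	/* == 1 << folio_order(folio) */

	/* ... map nr_pages ptes covering the folio here ... */

	folio_put(folio);	/* drop the sketch's reference */
	return 0;
}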