From patchwork Wed Sep 4 10:09:21 2024
X-Patchwork-Submitter: Dev Jain
X-Patchwork-Id: 13790392
From: Dev Jain
To: akpm@linux-foundation.org, david@redhat.com, willy@infradead.org,
    kirill.shutemov@linux.intel.com
Cc: ryan.roberts@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com,
    cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com,
    dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org,
    jack@suse.cz, mark.rutland@arm.com, hughd@google.com,
    aneesh.kumar@kernel.org, yang@os.amperecomputing.com, peterx@redhat.com,
    ioworker0@gmail.com, jglisse@google.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, Dev Jain
Subject: [PATCH v2 0/2] Do not shatter hugezeropage on wp-fault
Date: Wed, 4 Sep 2024 15:39:21 +0530
Message-Id: <20240904100923.290042-1-dev.jain@arm.com>
X-Mailer: git-send-email 2.34.1
It was observed at [1] and [2] that the current kernel behaviour of
shattering a hugezeropage is inconsistent and suboptimal. For a VMA with
a THP-allowable order, when we write-fault on it, the kernel installs a
PMD-mapped THP. On the other hand, if we first get a read fault, we get
a PMD pointing to the hugezeropage; a subsequent write then triggers a
write-protection fault, shattering the hugezeropage into PTEs and leaving
one writable page while all the other PTEs stay write-protected. The
upshot is that, compared to the case of a single write fault, applications
suffer 512 extra page faults if they go on to use the whole VMA, plus we
incur the overhead of khugepaged later trying to replace that area with a
THP anyway. Instead, replace the hugezeropage with a THP on wp-fault; a
rough sketch of the intended flow is included after the diffstat below.

v1->v2:
 - Wrap the call to do_huge_zero_wp_pmd_locked() between the PMD lock
   and unlock
 - Call thp_fault_alloc() before do_huge_zero_wp_pmd_locked() to avoid
   calling a sleeping function from spinlock context

[1]: https://lore.kernel.org/all/3743d7e1-0b79-4eaf-82d5-d1ca29fe347d@arm.com/
[2]: https://lore.kernel.org/all/1cfae0c0-96a2-4308-9c62-f7a640520242@arm.com/

Dev Jain (2):
  mm: Abstract THP allocation
  mm: Allocate THP on hugezeropage wp-fault

 include/linux/huge_mm.h |   6 ++
 mm/huge_memory.c        | 171 +++++++++++++++++++++++++++++-----------
 mm/memory.c             |   5 +-
 3 files changed, 136 insertions(+), 46 deletions(-)
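For illustration, here is a minimal sketch of the wp-fault flow described
above. It is not the actual patch: only thp_fault_alloc() and
do_huge_zero_wp_pmd_locked() are named in this cover letter, so the
wrapper name and both signatures below are assumptions; what the sketch
is meant to show is the v2 ordering of allocating the THP before taking
the PMD lock and only installing it under the lock.

/*
 * Minimal sketch, assumed names/signatures: thp_fault_alloc() and
 * do_huge_zero_wp_pmd_locked() come from the cover letter, everything
 * else is illustrative. The ordering (allocate outside the PMD lock,
 * install under it) reflects the v1->v2 change.
 */
static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	struct folio *folio;

	/*
	 * Allocate the THP before taking the PMD lock, since the
	 * allocation may sleep and must not run in spinlock context.
	 */
	folio = thp_fault_alloc(GFP_TRANSHUGE, HPAGE_PMD_ORDER, vma, haddr);
	if (unlikely(!folio))
		return VM_FAULT_FALLBACK;	/* fall back to the old behaviour */

	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
	if (pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd)) {
		/* Still the hugezeropage: replace it with the new THP. */
		do_huge_zero_wp_pmd_locked(vmf, haddr, folio);
	} else {
		/* Raced with a concurrent fault; drop the unused folio. */
		folio_put(folio);
	}
	spin_unlock(vmf->ptl);
	return 0;
}

How this hooks into the existing PMD wp-fault path (the small mm/memory.c
change in the diffstat) is not shown here; see the patches themselves for
the real implementation.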