From patchwork Thu Mar 17 06:50:33 2022
X-Patchwork-Submitter: bibo mao
X-Patchwork-Id: 12783594
From: Bibo Mao
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Anshuman Khandual
Subject: [PATCH v2] mm: add access/dirty bit on numa page fault
Date: Thu, 17 Mar 2022 02:50:33 -0400
Message-Id: <20220317065033.2635123-1-maobibo@loongson.cn>
X-Mailer: git-send-email 2.31.1
On platforms such as x86 and arm that support hardware page-table walking, the
access and dirty bits are set by hardware. On platforms without such hardware
support, the access and dirty bits are set by software in a subsequent trap.
During a NUMA page fault, the dirty bit can be set on the old pte if migration
fails on a write fault. If migration succeeds, the access bit can be set on the
migrated new pte, and the dirty bit can also be set on a write fault.
Signed-off-by: Bibo Mao
---
 mm/memory.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index c125c4969913..65813bec9c06 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4404,6 +4404,22 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (migrate_misplaced_page(page, vma, target_nid)) {
 		page_nid = target_nid;
 		flags |= TNF_MIGRATED;
+
+		/*
+		 * update pte entry with access bit, and dirty bit for
+		 * write fault
+		 */
+		spin_lock(vmf->ptl);
+		pte = *vmf->pte;
+		pte = pte_mkyoung(pte);
+		if (was_writable) {
+			pte = pte_mkwrite(pte);
+			if (vmf->flags & FAULT_FLAG_WRITE)
+				pte = pte_mkdirty(pte);
+		}
+		set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
+		update_mmu_cache(vma, vmf->address, vmf->pte);
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
 	} else {
 		flags |= TNF_MIGRATE_FAIL;
 		vmf->pte = pte_offset_map(vmf->pmd, vmf->address);
@@ -4427,8 +4443,11 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	old_pte = ptep_modify_prot_start(vma, vmf->address, vmf->pte);
 	pte = pte_modify(old_pte, vma->vm_page_prot);
 	pte = pte_mkyoung(pte);
-	if (was_writable)
+	if (was_writable) {
 		pte = pte_mkwrite(pte);
+		if (vmf->flags & FAULT_FLAG_WRITE)
+			pte = pte_mkdirty(pte);
+	}
 	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
 	update_mmu_cache(vma, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);