From patchwork Mon Jan 29 12:46:48 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 13535528
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Matthew Wilcox,
 Ryan Roberts, Russell King, Catalin Marinas, Will Deacon, Dinh Nguyen,
 Michael Ellerman, Nicholas Piggin, Christophe Leroy, "Aneesh Kumar K.V",
 "Naveen N. Rao", Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Alexander Gordeev, Gerald Schaefer, Heiko Carstens, Vasily Gorbik,
 Christian Borntraeger, Sven Schnelle, "David S. Miller",
 linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 sparclinux@vger.kernel.org
Subject: [PATCH v3 14/15] mm/memory: ignore dirty/accessed/soft-dirty bits in folio_pte_batch()
Date: Mon, 29 Jan 2024 13:46:48 +0100
Message-ID: <20240129124649.189745-15-david@redhat.com>
In-Reply-To: <20240129124649.189745-1-david@redhat.com>
References: <20240129124649.189745-1-david@redhat.com>
MIME-Version: 1.0

Let's always ignore the accessed/young bit: we'll always mark the PTE
as old in our child process during fork, and upcoming users will
similarly not care.
Ignore the dirty bit only if we don't want to duplicate the dirty bit
into the child process during fork. Maybe, we could just set all PTEs
in the child dirty if any PTE is dirty. For now, let's keep the
behavior unchanged; this can be optimized later if required.

Ignore the soft-dirty bit only if the bit doesn't have any meaning in
the src vma, and similarly won't have any meaning in the copied dst
vma.

For now, we won't bother with the uffd-wp bit.

Reviewed-by: Ryan Roberts
Signed-off-by: David Hildenbrand
---
 mm/memory.c | 36 +++++++++++++++++++++++++++++++-----
 1 file changed, 31 insertions(+), 5 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 86f8a0021c8e..b2ec2b6b54c7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -953,24 +953,44 @@ static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
 	set_ptes(dst_vma->vm_mm, addr, dst_pte, pte, nr);
 }
 
+/* Flags for folio_pte_batch(). */
+typedef int __bitwise fpb_t;
+
+/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
+#define FPB_IGNORE_DIRTY		((__force fpb_t)BIT(0))
+
+/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
+#define FPB_IGNORE_SOFT_DIRTY		((__force fpb_t)BIT(1))
+
+static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
+{
+	if (flags & FPB_IGNORE_DIRTY)
+		pte = pte_mkclean(pte);
+	if (likely(flags & FPB_IGNORE_SOFT_DIRTY))
+		pte = pte_clear_soft_dirty(pte);
+	return pte_mkold(pte);
+}
+
 /*
  * Detect a PTE batch: consecutive (present) PTEs that map consecutive
  * pages of the same folio.
  *
- * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN.
+ * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN,
+ * the accessed bit, dirty bit (with FPB_IGNORE_DIRTY) and soft-dirty bit
+ * (with FPB_IGNORE_SOFT_DIRTY).
  */
 static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
-		pte_t *start_ptep, pte_t pte, int max_nr)
+		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags)
 {
 	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
 	const pte_t *end_ptep = start_ptep + max_nr;
-	pte_t expected_pte = pte_next_pfn(pte);
+	pte_t expected_pte = __pte_batch_clear_ignored(pte_next_pfn(pte), flags);
 	pte_t *ptep = start_ptep + 1;
 
 	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
 
 	while (ptep != end_ptep) {
-		pte = ptep_get(ptep);
+		pte = __pte_batch_clear_ignored(ptep_get(ptep), flags);
 
 		if (!pte_same(pte, expected_pte))
 			break;
@@ -1004,6 +1024,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 {
 	struct page *page;
 	struct folio *folio;
+	fpb_t flags = 0;
 	int err, nr;
 
 	page = vm_normal_page(src_vma, addr, pte);
@@ -1018,7 +1039,12 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 	 * by keeping the batching logic separate.
 	 */
 	if (unlikely(!*prealloc && folio_test_large(folio) && max_nr != 1)) {
-		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr);
+		if (src_vma->vm_flags & VM_SHARED)
+			flags |= FPB_IGNORE_DIRTY;
+		if (!vma_soft_dirty_enabled(src_vma))
+			flags |= FPB_IGNORE_SOFT_DIRTY;
+
+		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags);
 		folio_ref_add(folio, nr);
 		if (folio_test_anon(folio)) {
 			if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
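
To illustrate the batching idea outside the kernel, here is a minimal
userspace C sketch. All names below are simplified stand-ins, not the
kernel's real definitions: a plain uint64_t plays the role of pte_t,
and the PTE_* masks are a hypothetical bit layout. It mirrors what
__pte_batch_clear_ignored() and folio_pte_batch() do: normalize away
the ignored bits, then compare each PTE against the expected value for
the next PFN.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t pte_t;		/* stand-in for the kernel's pte_t */
typedef int fpb_t;		/* stand-in for the __bitwise flag type */

#define PTE_DIRTY		(1ULL << 0)	/* hypothetical bit layout */
#define PTE_SOFT_DIRTY		(1ULL << 1)
#define PTE_YOUNG		(1ULL << 2)
#define PTE_PFN_SHIFT		12

#define FPB_IGNORE_DIRTY	(1 << 0)
#define FPB_IGNORE_SOFT_DIRTY	(1 << 1)

/* Rough analogue of __pte_batch_clear_ignored(): mask ignored bits. */
static pte_t clear_ignored(pte_t pte, fpb_t flags)
{
	if (flags & FPB_IGNORE_DIRTY)
		pte &= ~PTE_DIRTY;
	if (flags & FPB_IGNORE_SOFT_DIRTY)
		pte &= ~PTE_SOFT_DIRTY;
	return pte & ~PTE_YOUNG; /* the accessed/young bit is always ignored */
}

/* Rough analogue of pte_next_pfn(): advance the PFN field by one. */
static pte_t next_pfn(pte_t pte)
{
	return pte + (1ULL << PTE_PFN_SHIFT);
}

/* Rough analogue of folio_pte_batch(): count consecutive matching PTEs. */
static int pte_batch(const pte_t *ptep, int max_nr, fpb_t flags)
{
	pte_t expected = clear_ignored(next_pfn(ptep[0]), flags);
	int nr = 1;

	while (nr < max_nr && clear_ignored(ptep[nr], flags) == expected) {
		expected = next_pfn(expected);
		nr++;
	}
	return nr;
}

int main(void)
{
	/* Three consecutive PFNs; the middle PTE is dirty and young. */
	pte_t ptes[3] = {
		1ULL << PTE_PFN_SHIFT,
		(2ULL << PTE_PFN_SHIFT) | PTE_DIRTY | PTE_YOUNG,
		3ULL << PTE_PFN_SHIFT,
	};

	printf("no flags:       batch of %d\n", pte_batch(ptes, 3, 0));
	printf("ignoring dirty: batch of %d\n",
	       pte_batch(ptes, 3, FPB_IGNORE_DIRTY));
	return 0;
}

With the dirty bit ignored, all three entries fold into one batch of 3;
without it, the dirty middle entry ends the batch after a single PTE.
That is the same effect the flags have during fork(): hardware having
set the accessed or dirty bit on some PTEs of a large folio no longer
prevents them from being copied as one batch.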