From patchwork Tue Feb 27 10:42:01 2024
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13573532
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Barry Song, David Hildenbrand,
	Lance Yang, Ryan Roberts, Yin Fengwei
Subject: [PATCH v2] mm: make folio_pte_batch available outside of mm/memory.c
Date: Tue, 27 Feb 2024 23:42:01 +1300
Message-Id: <20240227104201.337988-1-21cnbao@gmail.com>

From: Barry Song

madvise, mprotect and some others might need folio_pte_batch to check if
a range of PTEs are completely mapped to a large folio with contiguous
physical addresses. Let's make it available in mm/internal.h.
Suggested-by: David Hildenbrand
Cc: Lance Yang
Cc: Ryan Roberts
Cc: Yin Fengwei
[david@redhat.com: improve the doc for the exported func]
Signed-off-by: David Hildenbrand
Signed-off-by: Barry Song
Reviewed-by: Ryan Roberts
Acked-by: David Hildenbrand
---
-v2:
 * inline folio_pte_batch according to Ryan and David;
 * improve the doc, thanks to David's work on this;
 * fix tags of David and add David's s-o-b;
-v1:
 https://lore.kernel.org/all/20240227024050.244567-1-21cnbao@gmail.com/

 mm/internal.h | 90 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/memory.c   | 76 ------------------------------------------
 2 files changed, 90 insertions(+), 76 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 13b59d384845..fa9e2f7db506 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -83,6 +83,96 @@ static inline void *folio_raw_mapping(struct folio *folio)
 	return (void *)(mapping & ~PAGE_MAPPING_FLAGS);
 }
 
+/* Flags for folio_pte_batch(). */
+typedef int __bitwise fpb_t;
+
+/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
+#define FPB_IGNORE_DIRTY		((__force fpb_t)BIT(0))
+
+/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
+#define FPB_IGNORE_SOFT_DIRTY		((__force fpb_t)BIT(1))
+
+static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
+{
+	if (flags & FPB_IGNORE_DIRTY)
+		pte = pte_mkclean(pte);
+	if (likely(flags & FPB_IGNORE_SOFT_DIRTY))
+		pte = pte_clear_soft_dirty(pte);
+	return pte_wrprotect(pte_mkold(pte));
+}
+
+/**
+ * folio_pte_batch - detect a PTE batch for a large folio
+ * @folio: The large folio to detect a PTE batch for.
+ * @addr: The user virtual address the first page is mapped at.
+ * @start_ptep: Page table pointer for the first entry.
+ * @pte: Page table entry for the first page.
+ * @max_nr: The maximum number of table entries to consider.
+ * @flags: Flags to modify the PTE batch semantics.
+ * @any_writable: Optional pointer to indicate whether any entry except the
+ *		  first one is writable.
+ *
+ * Detect a PTE batch: consecutive (present) PTEs that map consecutive
+ * pages of the same large folio.
+ *
+ * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN,
+ * the accessed bit, writable bit, dirty bit (with FPB_IGNORE_DIRTY) and
+ * soft-dirty bit (with FPB_IGNORE_SOFT_DIRTY).
+ *
+ * start_ptep must map any page of the folio. max_nr must be at least one and
+ * must be limited by the caller so scanning cannot exceed a single page table.
+ *
+ * Return: the number of table entries in the batch.
+ */
+static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
+		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
+		bool *any_writable)
+{
+	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
+	const pte_t *end_ptep = start_ptep + max_nr;
+	pte_t expected_pte, *ptep;
+	bool writable;
+	int nr;
+
+	if (any_writable)
+		*any_writable = false;
+
+	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
+	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
+	VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio);
+
+	nr = pte_batch_hint(start_ptep, pte);
+	expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags);
+	ptep = start_ptep + nr;
+
+	while (ptep < end_ptep) {
+		pte = ptep_get(ptep);
+		if (any_writable)
+			writable = !!pte_write(pte);
+		pte = __pte_batch_clear_ignored(pte, flags);
+
+		if (!pte_same(pte, expected_pte))
+			break;
+
+		/*
+		 * Stop immediately once we reached the end of the folio. In
+		 * corner cases the next PFN might fall into a different
+		 * folio.
+		 */
+		if (pte_pfn(pte) >= folio_end_pfn)
+			break;
+
+		if (any_writable)
+			*any_writable |= writable;
+
+		nr = pte_batch_hint(ptep, pte);
+		expected_pte = pte_advance_pfn(expected_pte, nr);
+		ptep += nr;
+	}
+
+	return min(ptep - start_ptep, max_nr);
+}
+
 void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
 						int nr_throttled);
 static inline void acct_reclaim_writeback(struct folio *folio)
diff --git a/mm/memory.c b/mm/memory.c
index 1c45b6a42a1b..a7bcc39de56b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -953,82 +953,6 @@ static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
 	set_ptes(dst_vma->vm_mm, addr, dst_pte, pte, nr);
 }
 
-/* Flags for folio_pte_batch(). */
-typedef int __bitwise fpb_t;
-
-/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
-#define FPB_IGNORE_DIRTY		((__force fpb_t)BIT(0))
-
-/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
-#define FPB_IGNORE_SOFT_DIRTY		((__force fpb_t)BIT(1))
-
-static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
-{
-	if (flags & FPB_IGNORE_DIRTY)
-		pte = pte_mkclean(pte);
-	if (likely(flags & FPB_IGNORE_SOFT_DIRTY))
-		pte = pte_clear_soft_dirty(pte);
-	return pte_wrprotect(pte_mkold(pte));
-}
-
-/*
- * Detect a PTE batch: consecutive (present) PTEs that map consecutive
- * pages of the same folio.
- *
- * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN,
- * the accessed bit, writable bit, dirty bit (with FPB_IGNORE_DIRTY) and
- * soft-dirty bit (with FPB_IGNORE_SOFT_DIRTY).
- *
- * If "any_writable" is set, it will indicate if any other PTE besides the
- * first (given) PTE is writable.
- */
-static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
-		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
-		bool *any_writable)
-{
-	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
-	const pte_t *end_ptep = start_ptep + max_nr;
-	pte_t expected_pte, *ptep;
-	bool writable;
-	int nr;
-
-	if (any_writable)
-		*any_writable = false;
-
-	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
-
-	nr = pte_batch_hint(start_ptep, pte);
-	expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags);
-	ptep = start_ptep + nr;
-
-	while (ptep < end_ptep) {
-		pte = ptep_get(ptep);
-		if (any_writable)
-			writable = !!pte_write(pte);
-		pte = __pte_batch_clear_ignored(pte, flags);
-
-		if (!pte_same(pte, expected_pte))
-			break;
-
-		/*
-		 * Stop immediately once we reached the end of the folio. In
-		 * corner cases the next PFN might fall into a different
-		 * folio.
-		 */
-		if (pte_pfn(pte) >= folio_end_pfn)
-			break;
-
-		if (any_writable)
-			*any_writable |= writable;
-
-		nr = pte_batch_hint(ptep, pte);
-		expected_pte = pte_advance_pfn(expected_pte, nr);
-		ptep += nr;
-	}
-
-	return min(ptep - start_ptep, max_nr);
-}
-
 /*
  * Copy one present PTE, trying to batch-process subsequent PTEs that map
  * consecutive pages of the same folio by copying them as well.