From patchwork Thu Feb 15 10:32:05 2024
X-Patchwork-Submitter: Ryan Roberts <ryan.roberts@arm.com>
X-Patchwork-Id: 13557868
From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier,
    James Morse, Andrey Ryabinin, Andrew Morton, Matthew Wilcox,
    Mark Rutland, David Hildenbrand, Kefeng Wang, John Hubbard,
    Zi Yan, Barry Song <21cnbao@gmail.com>, Alistair Popple,
    Yang Shi, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, "H. Peter Anvin"
Peter Anvin" Cc: Ryan Roberts , linux-arm-kernel@lists.infradead.org, x86@kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH v6 18/18] arm64/mm: Automatically fold contpte mappings Date: Thu, 15 Feb 2024 10:32:05 +0000 Message-Id: <20240215103205.2607016-19-ryan.roberts@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240215103205.2607016-1-ryan.roberts@arm.com> References: <20240215103205.2607016-1-ryan.roberts@arm.com> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 3A3BD180017 X-Stat-Signature: qo7xt15sft8iptksgjei9y36mw43p4oj X-HE-Tag: 1707993202-195522 X-HE-Meta: U2FsdGVkX1+D2ZskTCmlX7t6njMQZLTtESj7oaBVXRc8z3aDcQU1P90k/4BYesKEyI0EwYP1xiqWgpVeyJJr8ZTqJeWUzCdriE31vVHPPB1o3R+zM65w2BlUWnY6uRkFNBatDB1zHh20ApjrLonzosgDlJxcmdBZ30IRIPepaiQJouNOTRvIqwFM3269t7ZRmz7RmH4+1V38MZMOvWWI1myt72MM7XaPbNvuLXdZNHsbzwD9HstMIVlhtuXjpVtL7XvhxMgFOQUrx0Jv+3rRGlUr/th1vj8GMydMxbvJafLxjEilsr9uPWkq+RizR909u+u5rorKy/43vDIPHpQ0joK2s0P9tOOrj8Cj6YvKq0gTKffq40MZ19qSfCmsZw+HZLlkcgWGE1hsRG+hapSdk0MmVJNQ7oCYX5MZpS/XeBZouW6G1/B7qb9Ke9PqvMyN6W2u3Id7+zdHIM2eglgCHyTsH9hR7qgnPEDKIXsQWnQVciqhYqG1etgtAlkzt20CREzcs9Ibsh8cn8POp6dDDHgd1+nRsF6RWX4XcDWLisPepTddTWdGq9pm+jtrUtVqLoPuY9fOvuJ/bmoB9DYwxecW1ZdgtUkHyi9JsWeXfr82l0CjbaUS+1jJy1asX8vK9x5A0pHMik5vGSCZiPKgAbL1Lfv7YyMg6oykphhojTqMVVjsXzECTOFyZD8+/7uw7+72lLcJbe33DdOfBiO0xjx1hbTpzhLqkl6OwQLFc0HvjZXQee/L1I1BsYz+WUIp3nsWjPjfz6kZeMArbCMIi14x9Kz01C3sTH6GwWflOw9mGHuNDbm8dHWiA87eujg6mc7nBnIoC/5g4EGTDZuofZJeYVSthVaKnfBDWbGkT7xgRsaVpyhjTz6dc7yEmzvesZNViyt05pOmjKlyqiD0i7EfK+2Ak39Zf/X5VYGavwAME5Iv1KJss8ru+eVtrZDzX9dPk2I7kYQcO4SZUfE ebdQtycZ GpA1kgJ5aSOIuaLbnx3sSQjmaEKyO8otJKCYfBSPTDCX8o5UNxP0iWttSyJOI4+rjqy+mnQau5PhZZ+7HQExi4qUsJxRJSL2GaMvfD1HZY3uXWbrkhWTP5s+nR+8XvCpL95fSi7wQOZvyBY11EI9rnCcrAZ9Amp6c4LbNM8FgKBOLIHd+8eYHHOGpQQfpf0winwPEbu8G7ConKfAKTcNT7QWWZG5TT21T77kruXOt5DhKf1g= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: There are situations where a change to a single PTE could cause the contpte block in which it resides to become foldable (i.e. could be repainted with the contiguous bit). Such situations arise, for example, when user space temporarily changes protections, via mprotect, for individual pages, such can be the case for certain garbage collectors. We would like to detect when such a PTE change occurs. However this can be expensive due to the amount of checking required. Therefore only perform the checks when an indiviual PTE is modified via mprotect (ptep_modify_prot_commit() -> set_pte_at() -> set_ptes(nr=1)) and only when we are setting the final PTE in a contpte-aligned block. Signed-off-by: Ryan Roberts Acked-by: Mark Rutland Acked-by: Catalin Marinas --- arch/arm64/include/asm/pgtable.h | 26 +++++++++++++ arch/arm64/mm/contpte.c | 64 ++++++++++++++++++++++++++++++++ 2 files changed, 90 insertions(+) diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 8310875133ff..401087e8a43d 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -1185,6 +1185,8 @@ extern void ptep_modify_prot_commit(struct vm_area_struct *vma, * where it is possible and makes sense to do so. The PTE_CONT bit is considered * a private implementation detail of the public ptep API (see below). 
  */
+extern void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
+				pte_t *ptep, pte_t pte);
 extern void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
 				pte_t *ptep, pte_t pte);
 extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte);
@@ -1206,6 +1208,29 @@ extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 				unsigned long addr, pte_t *ptep,
 				pte_t entry, int dirty);
 
+static __always_inline void contpte_try_fold(struct mm_struct *mm,
+				unsigned long addr, pte_t *ptep, pte_t pte)
+{
+	/*
+	 * Only bother trying if both the virtual and physical addresses are
+	 * aligned and correspond to the last entry in a contig range. The core
+	 * code mostly modifies ranges from low to high, so this is likely
+	 * the last modification in the contig range, so a good time to fold.
+	 * We can't fold special mappings, because there is no associated folio.
+	 */
+
+	const unsigned long contmask = CONT_PTES - 1;
+	bool valign = ((addr >> PAGE_SHIFT) & contmask) == contmask;
+
+	if (unlikely(valign)) {
+		bool palign = (pte_pfn(pte) & contmask) == contmask;
+
+		if (unlikely(palign &&
+		    pte_valid(pte) && !pte_cont(pte) && !pte_special(pte)))
+			__contpte_try_fold(mm, addr, ptep, pte);
+	}
+}
+
 static __always_inline void contpte_try_unfold(struct mm_struct *mm,
 				unsigned long addr, pte_t *ptep, pte_t pte)
 {
@@ -1286,6 +1311,7 @@ static __always_inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 	if (likely(nr == 1)) {
 		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
 		__set_ptes(mm, addr, ptep, pte, 1);
+		contpte_try_fold(mm, addr, ptep, pte);
 	} else {
 		contpte_set_ptes(mm, addr, ptep, pte, nr);
 	}
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 50e0173dc5ee..16788f07716d 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -73,6 +73,70 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
 	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
 }
 
+void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
+			pte_t *ptep, pte_t pte)
+{
+	/*
+	 * We have already checked that the virtual and physical addresses are
+	 * correctly aligned for a contpte mapping in contpte_try_fold() so the
+	 * remaining checks are to ensure that the contpte range is fully
+	 * covered by a single folio, and ensure that all the ptes are valid
+	 * with contiguous PFNs and matching prots. We ignore the state of the
+	 * access and dirty bits for the purpose of deciding if it is a
+	 * contiguous range; the folding process will generate a single contpte
+	 * entry which has a single access and dirty bit. Those 2 bits are the
+	 * logical OR of their respective bits in the constituent pte entries.
+	 * In order to ensure the contpte range is covered by a single folio,
+	 * we must recover the folio from the pfn, but special mappings don't
+	 * have a folio backing them. Fortunately contpte_try_fold() already
+	 * checked that the pte is not special - we never try to fold special
+	 * mappings. Note we can't use vm_normal_page() for this since we don't
+	 * have the vma.
+	 */
+
+	unsigned long folio_start, folio_end;
+	unsigned long cont_start, cont_end;
+	pte_t expected_pte, subpte;
+	struct folio *folio;
+	struct page *page;
+	unsigned long pfn;
+	pte_t *orig_ptep;
+	pgprot_t prot;
+
+	int i;
+
+	if (!mm_is_user(mm))
+		return;
+
+	page = pte_page(pte);
+	folio = page_folio(page);
+	folio_start = addr - (page - &folio->page) * PAGE_SIZE;
+	folio_end = folio_start + folio_nr_pages(folio) * PAGE_SIZE;
+	cont_start = ALIGN_DOWN(addr, CONT_PTE_SIZE);
+	cont_end = cont_start + CONT_PTE_SIZE;
+
+	if (folio_start > cont_start || folio_end < cont_end)
+		return;
+
+	pfn = ALIGN_DOWN(pte_pfn(pte), CONT_PTES);
+	prot = pte_pgprot(pte_mkold(pte_mkclean(pte)));
+	expected_pte = pfn_pte(pfn, prot);
+	orig_ptep = ptep;
+	ptep = contpte_align_down(ptep);
+
+	for (i = 0; i < CONT_PTES; i++) {
+		subpte = pte_mkold(pte_mkclean(__ptep_get(ptep)));
+		if (!pte_same(subpte, expected_pte))
+			return;
+		expected_pte = pte_advance_pfn(expected_pte, 1);
+		ptep++;
+	}
+
+	pte = pte_mkcont(pte);
+	contpte_convert(mm, addr, orig_ptep, pte);
+}
+EXPORT_SYMBOL(__contpte_try_fold);
+
 void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
 			pte_t *ptep, pte_t pte)
 {
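
Not part of the patch: the following is a minimal user-space sketch of the
mprotect pattern the commit message describes, included only to illustrate
the path that reaches contpte_try_fold(). It assumes 4K pages (so
CONT_PTES == 16 and a contpte block is 64K), a kernel carrying this series,
and that the kernel's mTHP configuration allows the anonymous range to be
backed by a single 64K folio; the names (blksz, blk, etc.) are illustrative
and not taken from the patch.

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t pagesz = 4096;		/* assumes 4K base pages */
	size_t blksz = 16 * pagesz;	/* one contpte block: CONT_PTES * PAGE_SIZE */
	char *map, *blk;

	/* Over-map so a 64K-aligned block can be carved out of the mapping. */
	map = mmap(NULL, 2 * blksz, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (map == MAP_FAILED)
		return 1;

	blk = (char *)(((uintptr_t)map + blksz - 1) & ~(uintptr_t)(blksz - 1));

	/* Fault the block in; whether it ends up backed by a single 64K folio
	 * depends on the kernel's mTHP settings (assumption, not guaranteed). */
	memset(blk, 0, blksz);

	/* Temporarily change, then restore, protections on the last page only.
	 * The mprotect() back to read/write modifies a single PTE that is the
	 * final entry of its contpte block, i.e. the
	 * ptep_modify_prot_commit() -> set_pte_at() -> set_ptes(nr=1) case in
	 * which contpte_try_fold() gets a chance to repaint the block with
	 * the contiguous bit. */
	mprotect(blk + blksz - pagesz, pagesz, PROT_READ);
	mprotect(blk + blksz - pagesz, pagesz, PROT_READ | PROT_WRITE);

	munmap(map, 2 * blksz);
	return 0;
}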