From patchwork Fri Apr 11 09:16:23 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kevin Brodsky
X-Patchwork-Id: 14047908
From: Kevin Brodsky <kevin.brodsky@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Kevin Brodsky, Andrew Morton,
    Mark Brown, Catalin Marinas, Dave Hansen, David Hildenbrand,
    Ira Weiny, Jann Horn, Jeff Xu, Joey Gouly, Kees Cook,
    Linus Walleij, Andy Lutomirski, Marc Zyngier, Peter Zijlstra,
    Pierre Langlois, Quentin Perret, Rick Edgecombe,
    "Mike Rapoport (IBM)", Ryan Roberts,
    Thomas Gleixner, Will Deacon, Matthew Wilcox, Qi Zheng,
    linux-arm-kernel@lists.infradead.org, x86@kernel.org
Subject: [RFC PATCH v4 10/18] mm: Introduce kernel_pgtables_set_pkey()
Date: Fri, 11 Apr 2025 10:16:23 +0100
Message-ID: <20250411091631.954228-11-kevin.brodsky@arm.com>
X-Mailer: git-send-email 2.47.0
In-Reply-To: <20250411091631.954228-1-kevin.brodsky@arm.com>
References: <20250411091631.954228-1-kevin.brodsky@arm.com>
MIME-Version: 1.0

kernel_pgtables_set_pkey() allows setting the pkey of all page table
pages in swapper_pg_dir, recursively. This will be needed by
kpkeys_hardened_pgtables, as it relies on all PTPs being mapped with a
non-default pkey. Those initial kernel page tables cannot practically
be assigned a non-default pkey right when they are allocated, so
mutating them during (early) boot is required.

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
 include/linux/mm.h |   2 +
 mm/memory.c        | 137 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 139 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef420f4dc72c..dd1b918dc294 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4240,6 +4240,8 @@ int arch_get_shadow_stack_status(struct task_struct *t, unsigned long __user *st
 int arch_set_shadow_stack_status(struct task_struct *t, unsigned long status);
 int arch_lock_shadow_stack_status(struct task_struct *t, unsigned long status);
 
+int kernel_pgtables_set_pkey(int pkey);
+
 /*
  * mseal of userspace process's system mappings.
diff --git a/mm/memory.c b/mm/memory.c
index 2d8c265fc7d6..37c2bb35faea 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -76,6 +76,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
@@ -7376,3 +7378,138 @@ void vma_pgtable_walk_end(struct vm_area_struct *vma)
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_vma_unlock_read(vma);
 }
+
+static int __init set_page_pkey(void *p, int pkey)
+{
+	unsigned long addr = (unsigned long)p;
+
+	/*
+	 * swapper_pg_dir itself will be made read-only by mark_rodata_ro()
+	 * so there is no point in changing its pkey.
+	 */
+	if (p == swapper_pg_dir)
+		return 0;
+
+	return set_memory_pkey(addr, 1, pkey);
+}
+
+static int __init set_pkey_pte(pmd_t *pmd, int pkey)
+{
+	pte_t *pte;
+	int err;
+
+	pte = pte_offset_kernel(pmd, 0);
+	err = set_page_pkey(pte, pkey);
+
+	return err;
+}
+
+static int __init set_pkey_pmd(pud_t *pud, int pkey)
+{
+	pmd_t *pmd;
+	int i, err = 0;
+
+	pmd = pmd_offset(pud, 0);
+
+	err = set_page_pkey(pmd, pkey);
+	if (err)
+		return err;
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (pmd_none(pmd[i]) || pmd_bad(pmd[i]) || pmd_leaf(pmd[i]))
+			continue;
+		err = set_pkey_pte(&pmd[i], pkey);
+		if (err)
+			break;
+	}
+
+	return err;
+}
+
+static int __init set_pkey_pud(p4d_t *p4d, int pkey)
+{
+	pud_t *pud;
+	int i, err = 0;
+
+	if (mm_pmd_folded(&init_mm))
+		return set_pkey_pmd((pud_t *)p4d, pkey);
+
+	pud = pud_offset(p4d, 0);
+
+	err = set_page_pkey(pud, pkey);
+	if (err)
+		return err;
+
+	for (i = 0; i < PTRS_PER_PUD; i++) {
+		if (pud_none(pud[i]) || pud_bad(pud[i]) || pud_leaf(pud[i]))
+			continue;
+		err = set_pkey_pmd(&pud[i], pkey);
+		if (err)
+			break;
+	}
+
+	return err;
+}
+
+static int __init set_pkey_p4d(pgd_t *pgd, int pkey)
+{
+	p4d_t *p4d;
+	int i, err = 0;
+
+	if (mm_pud_folded(&init_mm))
+		return set_pkey_pud((p4d_t *)pgd, pkey);
+
+	p4d = p4d_offset(pgd, 0);
+
+	err = set_page_pkey(p4d, pkey);
+	if (err)
+		return err;
+
+	for (i = 0; i < PTRS_PER_P4D; i++) {
+		if (p4d_none(p4d[i]) || p4d_bad(p4d[i]) || p4d_leaf(p4d[i]))
+			continue;
+		err = set_pkey_pud(&p4d[i], pkey);
+		if (err)
+			break;
+	}
+
+	return err;
+}
+
+/**
+ * kernel_pgtables_set_pkey - set pkey for all kernel page table pages
+ * @pkey: pkey to set the page table pages to
+ *
+ * Walks swapper_pg_dir setting the protection key of every page table page (at
+ * all levels) to @pkey. swapper_pg_dir itself is left untouched as it is
+ * expected to be mapped read-only by mark_rodata_ro().
+ *
+ * No-op if the architecture does not support kpkeys.
+ */
+int __init kernel_pgtables_set_pkey(int pkey)
+{
+	pgd_t *pgd = swapper_pg_dir;
+	int i, err = 0;
+
+	if (!arch_kpkeys_enabled())
+		return 0;
+
+	spin_lock(&init_mm.page_table_lock);
+
+	if (mm_p4d_folded(&init_mm)) {
+		err = set_pkey_p4d(pgd, pkey);
+		goto out;
+	}
+
+	for (i = 0; i < PTRS_PER_PGD; i++) {
+		if (pgd_none(pgd[i]) || pgd_bad(pgd[i]) || pgd_leaf(pgd[i]))
+			continue;
+		err = set_pkey_p4d(&pgd[i], pkey);
+		if (err)
+			break;
+	}
+
+out:
+	spin_unlock(&init_mm.page_table_lock);
+	return err;
+}
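
A note on expected usage (illustration only, not part of the patch): an
architecture implementing kpkeys would call kernel_pgtables_set_pkey()
once during early boot, after the linear map is usable, so that every
kernel page table page ends up mapped with the page-table pkey. Below
is a minimal sketch, assuming a hypothetical KPKEYS_PKEY_PGTABLES
constant naming that pkey and an early initcall as the call site; the
real constant and call site are defined elsewhere in the series.

#include <linux/init.h>
#include <linux/mm.h>
#include <linux/printk.h>

/*
 * Hypothetical example: retag all kernel page table pages early in boot.
 * KPKEYS_PKEY_PGTABLES is an assumed name for whichever pkey the
 * architecture reserves for page table pages.
 */
static int __init example_harden_kernel_pgtables(void)
{
	int err;

	/* kernel_pgtables_set_pkey() is a no-op if kpkeys is unsupported. */
	err = kernel_pgtables_set_pkey(KPKEYS_PKEY_PGTABLES);
	if (err)
		pr_warn("failed to set pkey on kernel page tables: %d\n", err);

	return err;
}
early_initcall(example_harden_kernel_pgtables);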