From patchwork Fri Aug 9 16:09:08 2024
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Sean Christopherson, Oscar Salvador, Jason Gunthorpe, Axel Rasmussen,
	linux-arm-kernel@lists.infradead.org, x86@kernel.org, peterx@redhat.com,
	Will Deacon, Gavin Shan, Paolo Bonzini, Zi Yan, Andrew Morton,
	Catalin Marinas, Ingo Molnar, Alistair Popple, Borislav Petkov,
	David Hildenbrand, Thomas Gleixner, kvm@vger.kernel.org, Dave Hansen,
	Alex Williamson, Yan Zhao
Subject: [PATCH 18/19] mm/arm64: Support large pfn mappings
Date: Fri, 9 Aug 2024 12:09:08 -0400
Message-ID: <20240809160909.1023470-19-peterx@redhat.com>
In-Reply-To: <20240809160909.1023470-1-peterx@redhat.com>
References: <20240809160909.1023470-1-peterx@redhat.com>

Support huge pfnmaps by using bit 56 (PTE_SPECIAL) for "special"
on pmds/puds. Provide the pmd/pud helpers to set/get the special bit.

One more thing is missing for arm64: pxx_pgprot() for pmd/pud. Add
them too; they are mostly the same as the pte version, obtained by
dropping the pfn field. These helpers are needed by the new
follow_pfnmap*() API to report valid pgprot_t results.

Note that arm64 doesn't support huge PUDs yet, but it's still
straightforward to provide the pud helpers that we need altogether.
Only the PMD helpers bring an immediate benefit until arm64 supports
huge PUDs in general (e.g. in THPs).

Cc: linux-arm-kernel@lists.infradead.org
Cc: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/pgtable.h | 29 +++++++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b3fc891f1544..5f026b95f309 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -99,6 +99,7 @@ config ARM64
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_SUPPORTS_PER_VMA_LOCK
+	select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE
 	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
 	select ARCH_WANT_DEFAULT_BPF_JIT
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b78cc4a6758b..2faecc033a19 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -578,6 +578,14 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 	return pte_pmd(set_pte_bit(pmd_pte(pmd), __pgprot(PTE_DEVMAP)));
 }
 
+#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
+#define pmd_special(pmd)	(!!((pmd_val(pmd) & PTE_SPECIAL)))
+static inline pmd_t pmd_mkspecial(pmd_t pmd)
+{
+	return set_pmd_bit(pmd, __pgprot(PTE_SPECIAL));
+}
+#endif
+
 #define __pmd_to_phys(pmd)	__pte_to_phys(pmd_pte(pmd))
 #define __phys_to_pmd_val(phys)	__phys_to_pte_val(phys)
 #define pmd_pfn(pmd)		((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT)
@@ -595,6 +603,27 @@ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
 #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
 #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
+#ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP
+#define pud_special(pud)	pte_special(pud_pte(pud))
+#define pud_mkspecial(pud)	pte_pud(pte_mkspecial(pud_pte(pud)))
+#endif
+
+#define pmd_pgprot pmd_pgprot
+static inline pgprot_t pmd_pgprot(pmd_t pmd)
+{
+	unsigned long pfn = pmd_pfn(pmd);
+
+	return __pgprot(pmd_val(pfn_pmd(pfn, __pgprot(0))) ^ pmd_val(pmd));
+}
+
+#define pud_pgprot pud_pgprot
+static inline pgprot_t pud_pgprot(pud_t pud)
+{
+	unsigned long pfn = pud_pfn(pud);
+
+	return __pgprot(pud_val(pfn_pud(pfn, __pgprot(0))) ^ pud_val(pud));
+}
+
 static inline void __set_pte_at(struct mm_struct *mm,
 				unsigned long __always_unused addr,
 				pte_t *ptep, pte_t pte, unsigned int nr)