From patchwork Fri Aug 9 16:09:07 2024
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Sean Christopherson, Oscar Salvador, Jason Gunthorpe, Axel Rasmussen, linux-arm-kernel@lists.infradead.org, x86@kernel.org, peterx@redhat.com, Will Deacon, Gavin Shan, Paolo Bonzini, Zi Yan, Andrew Morton, Catalin Marinas, Ingo Molnar, Alistair Popple, Borislav Petkov, David Hildenbrand, Thomas Gleixner, kvm@vger.kernel.org, Dave Hansen, Alex Williamson, Yan Zhao
Subject: [PATCH 17/19] mm/x86: Support large pfn mappings
Date: Fri, 9 Aug 2024 12:09:07 -0400
Message-ID: <20240809160909.1023470-18-peterx@redhat.com>
In-Reply-To: <20240809160909.1023470-1-peterx@redhat.com>
References: <20240809160909.1023470-1-peterx@redhat.com>

Helpers to install and detect special pmd/pud entries.
In short, bit 9 on x86 is not used for pmd/pud entries, so we can define the special bit there the same way as at the pte level. Note that bit 9 is also used by _PAGE_BIT_CPA_TEST, but that is only exercised by a debug test and should not conflict here.

Also note that pxx_set|clear_flags() for pmd/pud need to be moved earlier in the file so that they can be referenced by the new special-bit helpers. There is no change to the code that was moved.

Cc: x86@kernel.org
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Signed-off-by: Peter Xu
---
 arch/x86/Kconfig               |  1 +
 arch/x86/include/asm/pgtable.h | 80 ++++++++++++++++++++++------------
 2 files changed, 53 insertions(+), 28 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index acd9745bf2ae..7a3fb2ff3e72 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -28,6 +28,7 @@ config X86_64
 	select ARCH_HAS_GIGANTIC_PAGE
 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
 	select ARCH_SUPPORTS_PER_VMA_LOCK
+	select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE
 	select HAVE_ARCH_SOFT_DIRTY
 	select MODULES_USE_ELF_RELA
 	select NEED_DMA_MAP_STATE
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index a7c1e9cfea41..1e463c9a650f 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -120,6 +120,34 @@ extern pmdval_t early_pmd_flags;
 #define arch_end_context_switch(prev)	do {} while(0)
 #endif	/* CONFIG_PARAVIRT_XXL */
 
+static inline pmd_t pmd_set_flags(pmd_t pmd, pmdval_t set)
+{
+	pmdval_t v = native_pmd_val(pmd);
+
+	return native_make_pmd(v | set);
+}
+
+static inline pmd_t pmd_clear_flags(pmd_t pmd, pmdval_t clear)
+{
+	pmdval_t v = native_pmd_val(pmd);
+
+	return native_make_pmd(v & ~clear);
+}
+
+static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
+{
+	pudval_t v = native_pud_val(pud);
+
+	return native_make_pud(v | set);
+}
+
+static inline pud_t pud_clear_flags(pud_t pud, pudval_t clear)
+{
+	pudval_t v = native_pud_val(pud);
+
+	return native_make_pud(v & ~clear);
+}
+
 /*
  * The following only work if pte_present() is true.
  * Undefined behaviour if not..
@@ -317,6 +345,30 @@ static inline int pud_devmap(pud_t pud)
 }
 #endif
 
+#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
+static inline bool pmd_special(pmd_t pmd)
+{
+	return pmd_flags(pmd) & _PAGE_SPECIAL;
+}
+
+static inline pmd_t pmd_mkspecial(pmd_t pmd)
+{
+	return pmd_set_flags(pmd, _PAGE_SPECIAL);
+}
+#endif	/* CONFIG_ARCH_SUPPORTS_PMD_PFNMAP */
+
+#ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP
+static inline bool pud_special(pud_t pud)
+{
+	return pud_flags(pud) & _PAGE_SPECIAL;
+}
+
+static inline pud_t pud_mkspecial(pud_t pud)
+{
+	return pud_set_flags(pud, _PAGE_SPECIAL);
+}
+#endif	/* CONFIG_ARCH_SUPPORTS_PUD_PFNMAP */
+
 static inline int pgd_devmap(pgd_t pgd)
 {
 	return 0;
@@ -487,20 +539,6 @@ static inline pte_t pte_mkdevmap(pte_t pte)
 	return pte_set_flags(pte, _PAGE_SPECIAL|_PAGE_DEVMAP);
 }
 
-static inline pmd_t pmd_set_flags(pmd_t pmd, pmdval_t set)
-{
-	pmdval_t v = native_pmd_val(pmd);
-
-	return native_make_pmd(v | set);
-}
-
-static inline pmd_t pmd_clear_flags(pmd_t pmd, pmdval_t clear)
-{
-	pmdval_t v = native_pmd_val(pmd);
-
-	return native_make_pmd(v & ~clear);
-}
-
 /* See comments above mksaveddirty_shift() */
 static inline pmd_t pmd_mksaveddirty(pmd_t pmd)
 {
@@ -595,20 +633,6 @@ static inline pmd_t pmd_mkwrite_novma(pmd_t pmd)
 pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 #define pmd_mkwrite pmd_mkwrite
 
-static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
-{
-	pudval_t v = native_pud_val(pud);
-
-	return native_make_pud(v | set);
-}
-
-static inline pud_t pud_clear_flags(pud_t pud, pudval_t clear)
-{
-	pudval_t v = native_pud_val(pud);
-
-	return native_make_pud(v & ~clear);
-}
-
 /* See comments above mksaveddirty_shift() */
 static inline pud_t pud_mksaveddirty(pud_t pud)
 {