From patchwork Mon Mar 18 20:03:58 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13595780
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrew Morton, x86@kernel.org, Muchun Song, Mike Rapoport,
	Matthew Wilcox, sparclinux@vger.kernel.org, Jason Gunthorpe,
	linuxppc-dev@lists.ozlabs.org, Christophe Leroy,
	linux-arm-kernel@lists.infradead.org, peterx@redhat.com,
	Mark Salter, Catalin Marinas, Will Deacon
Subject: [PATCH v2 08/14] mm/arm64: Merge pXd_huge() and pXd_leaf() definitions
Date: Mon, 18 Mar 2024 16:03:58 -0400
Message-ID: <20240318200404.448346-9-peterx@redhat.com>
In-Reply-To: <20240318200404.448346-1-peterx@redhat.com>
References: <20240318200404.448346-1-peterx@redhat.com>

From: Peter Xu

Unlike most archs, aarch64 defines pXd_huge() and pXd_leaf() slightly
differently.  Redefine pXd_huge() in terms of pXd_leaf().

There used to be two traps in the old aarch64 definitions of these APIs,
found when reading the surrounding code:

  (1) 4797ec2dc83a ("arm64: fix pud_huge() for 2-level pagetables")
  (2) 23bc8f69f0ec ("arm64: mm: fix p?d_leaf()")

Defining pXd_huge() with the current pXd_leaf() makes sure (2) isn't a
problem (for PROT_NONE checks).  To make sure it also works for (1), move
the __PAGETABLE_PMD_FOLDED check over to pud_leaf(), allowing it to
constantly return "false" for 2-level pgtables, which looks even safer
and now covers both cases.
Cc: Muchun Song
Cc: Mark Salter
Cc: Catalin Marinas
Cc: Will Deacon
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Peter Xu
---
 arch/arm64/include/asm/pgtable.h | 4 ++++
 arch/arm64/mm/hugetlbpage.c      | 8 ++------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 401087e8a43d..14d24c357c7a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -704,7 +704,11 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 #define pud_none(pud)		(!pud_val(pud))
 #define pud_bad(pud)		(!pud_table(pud))
 #define pud_present(pud)	pte_present(pud_pte(pud))
+#ifndef __PAGETABLE_PMD_FOLDED
 #define pud_leaf(pud)		(pud_present(pud) && !pud_table(pud))
+#else
+#define pud_leaf(pud)		false
+#endif
 #define pud_valid(pud)		pte_valid(pud_pte(pud))
 #define pud_user(pud)		pte_user(pud_pte(pud))
 #define pud_user_exec(pud)	pte_user_exec(pud_pte(pud))
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 0f0e10bb0a95..1234bbaef5bf 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -81,16 +81,12 @@ bool arch_hugetlb_migration_supported(struct hstate *h)
 
 int pmd_huge(pmd_t pmd)
 {
-	return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
+	return pmd_leaf(pmd);
 }
 
 int pud_huge(pud_t pud)
 {
-#ifndef __PAGETABLE_PMD_FOLDED
-	return pud_val(pud) && !(pud_val(pud) & PUD_TABLE_BIT);
-#else
-	return 0;
-#endif
+	return pud_leaf(pud);
 }
 
 static int find_num_contig(struct mm_struct *mm, unsigned long addr,
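
For reference, below is a minimal sketch of the contract pmd_huge()/pud_huge()
now inherit from pmd_leaf()/pud_leaf().  walk_example_is_leaf() is a made-up
illustration, not anything in the tree, and it omits the pXd_none()/pXd_bad()
checks a real walker needs:

/*
 * Illustrative sketch only: walk_example_is_leaf() is not a kernel API.
 * It shows what p?d_leaf() is expected to report after this patch:
 * "true" when the entry maps memory directly, including PROT_NONE block
 * mappings (the trap fixed by 23bc8f69f0ec), and "false" for a PUD level
 * that is folded away on 2-level pgtables (the trap fixed by
 * 4797ec2dc83a), so that p?d_huge() can simply forward to it.
 */
#include <linux/mm.h>
#include <linux/pgtable.h>

static bool walk_example_is_leaf(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd = pgd_offset(mm, addr);
	p4d_t *p4d = p4d_offset(pgd, addr);
	pud_t *pud = pud_offset(p4d, addr);
	pmd_t *pmd;

	if (pud_leaf(*pud))
		return true;	/* addr covered by a PUD block mapping */

	pmd = pmd_offset(pud, addr);
	if (pmd_leaf(*pmd))
		return true;	/* addr covered by a PMD block mapping */

	return false;		/* would descend to the PTE level instead */
}

With __PAGETABLE_PMD_FOLDED defined, the pud_leaf() branch above constant-folds
to false, which is exactly the behaviour the old #ifdef inside pud_huge() was
providing.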