From patchwork Wed Mar 13 21:47:10 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13591919
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org, Matthew Wilcox,
 linuxppc-dev@lists.ozlabs.org, Christophe Leroy, Andrew Morton,
 x86@kernel.org, peterx@redhat.com, Mike Rapoport, Muchun Song,
 sparclinux@vger.kernel.org, Jason Gunthorpe, Naoya Horiguchi,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
Subject: [PATCH 04/13] mm/x86: Change pXd_huge() behavior to exclude swap entries
Date: Wed, 13 Mar 2024 17:47:10 -0400
Message-ID: <20240313214719.253873-5-peterx@redhat.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240313214719.253873-1-peterx@redhat.com>
References: <20240313214719.253873-1-peterx@redhat.com>
MIME-Version: 1.0

From: Peter Xu

This patch partly reverts the commits below:

  3a194f3f8ad0 ("mm/hugetlb: make pud_huge() and follow_huge_pud() aware of non-present pud entry")
  cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage")

Right now the definition of pXd_huge() is inconsistent across the
kernel.  Architectures fall into two groups that treat swap entries
differently:

  - x86/sparc: allow pXd_huge() to accept swap entries
  - all the rest: do not allow pXd_huge() to accept swap entries

This is confusing.  The sparc helpers seem to have been added in 2016,
after x86's (2015), so sparc may simply have followed x86's lead.  x86
introduced the swap handling in 2015 to resolve hugetlb swap entries
hit in GUP, but GUP now guards swap entries with !pXd_present() at all
layers, so we should be safe.

We should define this API properly, one way or another, rather than
keeping it defined differently across architectures.

My gut feeling is that pXd_huge() should not include swap entries, and
it turns out I am not the only one thinking so: the question was
already raised when the current x86 pmd_huge() was proposed, by Ville
Syrjälä:

https://lore.kernel.org/all/Y2WQ7I4LXh8iUIRd@intel.com/

  I might also be missing something obvious, but why is it even
  necessary to treat PRESENT==0+PSE==0 as a huge entry?
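To make the quoted concern concrete, here is a small compilable
userspace model (illustration only, not kernel code: the file name,
helper names, and sample entry values are made up, and the new
behavior is modeled as a plain present+PSE test rather than the exact
x86 pmd_leaf() definition):

/* pxd_huge_model.c -- build with: cc -o pxd_huge_model pxd_huge_model.c */
#include <stdio.h>
#include <stdint.h>

#define _PAGE_PRESENT (1ULL << 0)	/* bit 0 on x86 */
#define _PAGE_PSE     (1ULL << 7)	/* bit 7 on x86 */

/* Old x86 check: "not none, and not a present non-leaf" -- this is
 * what lets PRESENT==0 && PSE==0 (e.g. a swap/migration entry) count
 * as huge. */
static int old_pmd_huge(uint64_t pmd)
{
	return pmd != 0 &&
	       (pmd & (_PAGE_PRESENT | _PAGE_PSE)) != _PAGE_PRESENT;
}

/* New behavior, modeled here as "present leaf": both bits must be set. */
static int new_pmd_huge(uint64_t pmd)
{
	return (pmd & (_PAGE_PRESENT | _PAGE_PSE)) ==
	       (_PAGE_PRESENT | _PAGE_PSE);
}

int main(void)
{
	uint64_t huge_leaf = _PAGE_PRESENT | _PAGE_PSE; /* normal huge mapping */
	uint64_t swap_like = 0x1234ULL << 9;            /* non-present, PSE clear */

	printf("huge leaf: old=%d new=%d\n",
	       old_pmd_huge(huge_leaf), new_pmd_huge(huge_leaf));
	printf("swap-like: old=%d new=%d\n",
	       old_pmd_huge(swap_like), new_pmd_huge(swap_like));
	return 0;
}

The old predicate reports the swap-like value as huge purely because
it is non-zero and non-present; that is exactly the behavior this
patch removes.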
The same point was raised when Jason Gunthorpe reviewed the other
patchset on swap entry handling:

https://lore.kernel.org/all/20240221125753.GQ13330@nvidia.com/

Revert the meaning back to the original.  This should bring no
functional change, since we should be ready with explicit
!pXd_present() guards everywhere.

Note that I also dropped the "#if CONFIG_PGTABLE_LEVELS > 2"; it was
probably there because things broke when 3a194f3f8ad0 was proposed,
according to the report here:

https://lore.kernel.org/all/Y2LYXItKQyaJTv8j@intel.com/

We should not need it anymore.

Instead of reverting to a raw _PAGE_PSE check, use pXd_leaf().

Cc: Naoya Horiguchi
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x86@kernel.org
Signed-off-by: Peter Xu
---
 arch/x86/mm/hugetlbpage.c | 18 ++++--------------
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 5804bbae4f01..8362953a24ce 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -20,29 +20,19 @@
 #include <asm/elf.h>
 
 /*
- * pmd_huge() returns 1 if @pmd is hugetlb related entry, that is normal
- * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
- * Otherwise, returns 0.
+ * pmd_huge() returns 1 if @pmd is hugetlb related entry.
  */
 int pmd_huge(pmd_t pmd)
 {
-	return !pmd_none(pmd) &&
-		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
+	return pmd_leaf(pmd);
 }
 
 /*
- * pud_huge() returns 1 if @pud is hugetlb related entry, that is normal
- * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
- * Otherwise, returns 0.
+ * pud_huge() returns 1 if @pud is hugetlb related entry.
  */
 int pud_huge(pud_t pud)
 {
-#if CONFIG_PGTABLE_LEVELS > 2
-	return !pud_none(pud) &&
-		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
-#else
-	return 0;
-#endif
+	return pud_leaf(pud);
 }
 
 #ifdef CONFIG_HUGETLB_PAGE
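As a companion to the "GUP guards swap entries with !pXd_present()"
argument above, here is a compilable userspace sketch of the walker
ordering that makes the revert safe (illustration only, not actual GUP
code: the *_model helpers, walk_pmd(), and the sample values are made
up):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define _PAGE_PRESENT (1ULL << 0)
#define _PAGE_PSE     (1ULL << 7)

static bool pmd_none_model(uint64_t v)    { return v == 0; }
static bool pmd_present_model(uint64_t v) { return v & _PAGE_PRESENT; }

/* Post-patch semantics: only a genuine present leaf counts as huge. */
static bool pmd_huge_model(uint64_t v)
{
	return (v & (_PAGE_PRESENT | _PAGE_PSE)) ==
	       (_PAGE_PRESENT | _PAGE_PSE);
}

enum walk { WALK_NONE, WALK_SWAP, WALK_HUGE, WALK_TABLE };

/* The ordering is the point: the !present test fences off swap and
 * migration entries before pmd_huge() is ever consulted, so
 * pmd_huge() no longer needs to accept non-present entries itself. */
static enum walk walk_pmd(uint64_t v)
{
	if (pmd_none_model(v))
		return WALK_NONE;
	if (!pmd_present_model(v))
		return WALK_SWAP;	/* migration/hwpoison handled here */
	if (pmd_huge_model(v))
		return WALK_HUGE;
	return WALK_TABLE;		/* present non-leaf: descend */
}

int main(void)
{
	assert(walk_pmd(0) == WALK_NONE);
	assert(walk_pmd(0x1234ULL << 9) == WALK_SWAP);	/* never reaches huge test */
	assert(walk_pmd(_PAGE_PRESENT | _PAGE_PSE) == WALK_HUGE);
	assert(walk_pmd(_PAGE_PRESENT) == WALK_TABLE);
	return 0;
}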