From patchwork Mon Mar 18 20:03:54 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13595776
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andrew Morton, x86@kernel.org, Muchun Song, Mike Rapoport,
    Matthew Wilcox, sparclinux@vger.kernel.org, Jason Gunthorpe,
    linuxppc-dev@lists.ozlabs.org, Christophe Leroy,
    linux-arm-kernel@lists.infradead.org, peterx@redhat.com,
    Naoya Horiguchi, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen
Subject: [PATCH v2 04/14] mm/x86: Change pXd_huge() behavior to exclude swap entries
Date: Mon, 18 Mar 2024 16:03:54 -0400
Message-ID: <20240318200404.448346-5-peterx@redhat.com>
In-Reply-To: <20240318200404.448346-1-peterx@redhat.com>
References: <20240318200404.448346-1-peterx@redhat.com>

From: Peter Xu <peterx@redhat.com>

This patch partly reverts the commits below:

3a194f3f8ad0 ("mm/hugetlb: make pud_huge() and follow_huge_pud() aware of non-present pud entry")
cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage")

Right now the definition of pXd_huge() is unclear across the kernel.  Two
groups of architectures treat swap entries differently:

  - x86/sparc: allow pXd_huge() to accept swap entries
  - all the rest: do not allow pXd_huge() to accept swap entries

This is confusing.  The sparc helpers seem to have been added in 2016,
after x86's (2015), so sparc probably just followed x86's lead.  x86
introduced the swap handling in 2015 to cope with hugetlb swap entries
hit in GUP, but GUP now guards against swap entries with !pXd_present()
at every level, so we should be safe.

We should define this API properly, one way or another, rather than keep
it defined differently across architectures.
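To see where the swap-entry confusion comes from, here is the old x86
check spelled out (an illustrative sketch based on the code removed by
the diff below; the name old_x86_pmd_huge is mine, not the kernel's):

	/*
	 * Masking the entry with (_PAGE_PRESENT|_PAGE_PSE) gives:
	 *
	 *   present + PSE       -> PRESENT|PSE != PRESENT -> "huge"
	 *   present, no PSE     -> PRESENT    == PRESENT -> not huge
	 *   non-present (swap)  -> 0 or PSE   != PRESENT -> "huge" too
	 *
	 * so any non-none, non-present entry -- e.g. a hugetlb migration
	 * or hwpoisoned swap entry -- also passed the old test.
	 */
	static inline int old_x86_pmd_huge(pmd_t pmd)
	{
		return !pmd_none(pmd) &&
			(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
	}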
Gut feeling tells me that pXd_huge() should not include swap entries, and
it turns out I am not the only one thinking so: the question was already
raised by Ville Syrjälä when the current x86 pmd_huge() was proposed:

https://lore.kernel.org/all/Y2WQ7I4LXh8iUIRd@intel.com/

  "I might also be missing something obvious, but why is it even
   necessary to treat PRESENT==0+PSE==0 as a huge entry?"

It was also questioned when Jason Gunthorpe reviewed the other patchset
on swap entry handling:

https://lore.kernel.org/all/20240221125753.GQ13330@nvidia.com/

Revert the meaning back to the original.  This should have no functional
change, as we should already have explicit !pXd_present() guards
everywhere.

Note that this also drops the "#if CONFIG_PGTABLE_LEVELS > 2" guard; it
was probably only there because things were breaking when 3a194f3f8ad0
was proposed, according to this report:

https://lore.kernel.org/all/Y2LYXItKQyaJTv8j@intel.com/

We should not need it anymore.  Instead of reverting to a raw _PAGE_PSE
check, leverage pXd_leaf().

Cc: Naoya Horiguchi
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x86@kernel.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/x86/mm/hugetlbpage.c | 18 ++++--------------
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 5804bbae4f01..8362953a24ce 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -20,29 +20,19 @@
 #include

 /*
- * pmd_huge() returns 1 if @pmd is hugetlb related entry, that is normal
- * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
- * Otherwise, returns 0.
+ * pmd_huge() returns 1 if @pmd is hugetlb related entry.
  */
 int pmd_huge(pmd_t pmd)
 {
-	return !pmd_none(pmd) &&
-		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
+	return pmd_leaf(pmd);
 }

 /*
- * pud_huge() returns 1 if @pud is hugetlb related entry, that is normal
- * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
- * Otherwise, returns 0.
+ * pud_huge() returns 1 if @pud is hugetlb related entry.
  */
 int pud_huge(pud_t pud)
 {
-#if CONFIG_PGTABLE_LEVELS > 2
-	return !pud_none(pud) &&
-		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
-#else
-	return 0;
-#endif
+	return pud_leaf(pud);
 }

 #ifdef CONFIG_HUGETLB_PAGE
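For context, a sketch of the caller-side pattern the "no functional
change" claim relies on (the handle_*/walk_* helpers are hypothetical,
not from this series): once pmd_huge() excludes swap entries, any walker
that still cares about migration/hwpoison entries is expected to test
pmd_present() first, the way GUP already does at every level.

	static int walk_one_pmd(pmd_t pmd)
	{
		if (!pmd_none(pmd) && !pmd_present(pmd))
			/* migration or hwpoisoned entry: handled explicitly */
			return handle_non_present_pmd(pmd);	/* hypothetical */
		if (pmd_huge(pmd))	/* now equivalent to pmd_leaf() */
			return handle_huge_pmd(pmd);		/* hypothetical */
		return walk_pte_table(pmd);			/* hypothetical */
	}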