From patchwork Wed Mar 27 15:23:20 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13607069
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 01/13] mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
Date: Wed, 27 Mar 2024 11:23:20 -0400
Message-ID: <20240327152332.950956-2-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>

From: Peter Xu

Introduce a config option that is selected whenever huge leaves can be
involved in pgtables (THP or hugetlbfs). It is useful to mark with this
new config any code that can process either hugetlb or THP pages at any
level higher than the pte level.
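For illustration, a user of the new option might look like the sketch
below; the helper is hypothetical and only shows how
CONFIG_PGTABLE_HAS_HUGE_LEAVES is meant to gate huge-leaf handling:

#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
/* Hypothetical helper: only meaningful when leaves larger than
 * PAGE_SIZE (THP or hugetlb) can exist at all. */
static unsigned long leaf_size(pmd_t pmd)
{
	return pmd_leaf(pmd) ? PMD_SIZE : PAGE_SIZE;
}
#else
static unsigned long leaf_size(pmd_t pmd)
{
	return PAGE_SIZE;	/* no huge leaves possible */
}
#endif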
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 mm/Kconfig | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index b924f4a5a3ef..497cdf4d8ebf 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -850,6 +850,12 @@ config READ_ONLY_THP_FOR_FS
 
 endif # TRANSPARENT_HUGEPAGE
 
+#
+# The architecture supports pgtable leaves that is larger than PAGE_SIZE
+#
+config PGTABLE_HAS_HUGE_LEAVES
+	def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE
+
 #
 # UP and nommu archs use km based percpu allocator
 #

From patchwork Wed Mar 27 15:23:21 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13607068
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 02/13] mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
Date: Wed, 27 Mar 2024 11:23:21 -0400
Message-ID: <20240327152332.950956-3-peterx@redhat.com>

From: Peter Xu

It will be used outside hugetlb.c soon.
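As an illustration of the intended use, an external caller could look
like the sketch below once the declaration lands; the wrapper name here
is hypothetical, and it relies on the !CONFIG_HUGETLB_PAGE stub added
below to compile down to "return false" when hugetlb is disabled:

/* Hypothetical caller outside hugetlb.c. */
static bool range_backed_by_pagecache(struct vm_area_struct *vma,
				      unsigned long addr)
{
	return hugetlbfs_pagecache_present(hstate_vma(vma), vma, addr);
}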
Signed-off-by: Peter Xu
---
 include/linux/hugetlb.h | 9 +++++++++
 mm/hugetlb.c            | 4 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d748628efc5e..294c78b3549f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -174,6 +174,9 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
 
 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, pud_t *pud);
+bool hugetlbfs_pagecache_present(struct hstate *h,
+				 struct vm_area_struct *vma,
+				 unsigned long address);
 
 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
 
@@ -1228,6 +1231,12 @@ static inline void hugetlb_register_node(struct node *node)
 static inline void hugetlb_unregister_node(struct node *node)
 {
 }
+
+static inline bool hugetlbfs_pagecache_present(
+    struct hstate *h, struct vm_area_struct *vma, unsigned long address)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f9640a81226e..65b9c9a48fd2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6110,8 +6110,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 /*
  * Return whether there is a pagecache page to back given address within VMA.
  */
-static bool hugetlbfs_pagecache_present(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long address)
+bool hugetlbfs_pagecache_present(struct hstate *h,
+				 struct vm_area_struct *vma, unsigned long address)
 {
 	struct address_space *mapping = vma->vm_file->f_mapping;
 	pgoff_t idx = linear_page_index(vma, address);

From patchwork Wed Mar 27 15:23:22 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606825
V" , Vlastimil Babka , James Houghton , Jason Gunthorpe , Mike Rapoport , Axel Rasmussen Subject: [PATCH v4 03/13] mm: Make HPAGE_PXD_* macros even if !THP Date: Wed, 27 Mar 2024 11:23:22 -0400 Message-ID: <20240327152332.950956-4-peterx@redhat.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240327152332.950956-1-peterx@redhat.com> References: <20240327152332.950956-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240327_082345_255461_8319C2C4 X-CRM114-Status: GOOD ( 13.37 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Peter Xu These macros can be helpful when we plan to merge hugetlb code into generic code. Move them out and define them as long as PGTABLE_HAS_HUGE_LEAVES is selected, because there are systems that only define HUGETLB_PAGE not THP. One note here is HPAGE_PMD_SHIFT must be defined even if PMD_SHIFT is not defined (e.g. !CONFIG_MMU case); it (or in other forms, like HPAGE_PMD_NR) is already used in lots of common codes without ifdef guards. Use the old trick to let complations work. Here we only need to differenciate HPAGE_PXD_SHIFT definitions. All the rest macros will be defined based on it. When at it, move HPAGE_PMD_NR / HPAGE_PMD_ORDER over together. Signed-off-by: Peter Xu --- include/linux/huge_mm.h | 29 +++++++++++++++-------------- 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 7576025db55d..d3bb25c39482 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -64,9 +64,6 @@ ssize_t single_hugepage_flag_show(struct kobject *kobj, enum transparent_hugepage_flag flag); extern struct kobj_attribute shmem_enabled_attr; -#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT) -#define HPAGE_PMD_NR (1< X-Patchwork-Id: 13606834 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id A06A6C54E67 for ; Wed, 27 Mar 2024 15:27:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=qTrH4Mp15l+ROmvYeS750aQTjtqy9o4A7c5q9RWjV9g=; b=AOShJvuTtorEkK t2DZe1CD7cpOUpeOkHlOrRguiCPCNByC0E9A5vS4fnjmHu5qgeIHxelQcxDvW7B77kzNRkha0Zwxq c+7arkmAQanNiugnGhiKbDsmoCbrfmLrgrOdYyIsAn86yxv7WWh5SeA/+W18BUzazHcrd0hyBmvpD 37OmULnXZOm63rtI59teJA/wQuKC4LBWBnYJ49TgoOEBXDFxwQHo+VLegvIuBdyimAzESp+izRxkg iOu7lhZXqEGazCPWP3Ryki9Lm95MaczKjl519Eq5QpjCRfbEVgzhOFxD2u7RNM4+Ilm5rephgwUdt UTI0Ys+DWUVMqjjaJGYQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 
From patchwork Wed Mar 27 15:23:23 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606834
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 04/13] mm: Introduce vma_pgtable_walk_{begin|end}()
Date: Wed, 27 Mar 2024 11:23:23 -0400
Message-ID: <20240327152332.950956-5-peterx@redhat.com>

From: Peter Xu

Introduce per-VMA begin()/end() helpers for pgtable walks, as
preparation for merging the hugetlb pgtable walkers into generic mm.

The helpers need to be called before and after a pgtable walk, and will
become necessary once the pgtable walker code supports hugetlb pages.
It's a hook point for any type of VMA, but for now only hugetlb uses it
to stabilize the pgtable pages from getting freed (due to possible pmd
unsharing).
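A walker would bracket its traversal with the helpers added below, e.g.
(a sketch; the walk body itself is elided):

vma_pgtable_walk_begin(vma);
/*
 * ... walk pgd/p4d/pud/pmd/pte for the range covered by this VMA;
 * for hugetlb VMAs the vma lock now keeps the pgtable pages stable
 * against pmd unsharing ...
 */
vma_pgtable_walk_end(vma);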
Reviewed-by: Christoph Hellwig
Reviewed-by: Muchun Song
Signed-off-by: Peter Xu
---
 include/linux/mm.h |  3 +++
 mm/memory.c        | 12 ++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index afe27ff3fa94..d8f78017d271 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4233,4 +4233,7 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }
 
+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 3d0c0cc33c57..27d173f9a521 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6438,3 +6438,15 @@ void ptlock_free(struct ptdesc *ptdesc)
 	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_unlock_read(vma);
+}

From patchwork Wed Mar 27 15:23:24 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606828
V" , Vlastimil Babka , James Houghton , Jason Gunthorpe , Mike Rapoport , Axel Rasmussen Subject: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback Date: Wed, 27 Mar 2024 11:23:24 -0400 Message-ID: <20240327152332.950956-6-peterx@redhat.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240327152332.950956-1-peterx@redhat.com> References: <20240327152332.950956-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240327_082350_335218_64371EBF X-CRM114-Status: GOOD ( 14.13 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Peter Xu The comment in the code explains the reasons. We took a different approach comparing to pmd_pfn() by providing a fallback function. Another option is to provide some lower level config options (compare to HUGETLB_PAGE or THP) to identify which layer an arch can support for such huge mappings. However that can be an overkill. Cc: Mike Rapoport (IBM) Cc: Matthew Wilcox Reviewed-by: Jason Gunthorpe Signed-off-by: Peter Xu Signed-off-by: Peter Xu --- arch/riscv/include/asm/pgtable.h | 1 + arch/s390/include/asm/pgtable.h | 1 + arch/sparc/include/asm/pgtable_64.h | 1 + arch/x86/include/asm/pgtable.h | 1 + include/linux/pgtable.h | 10 ++++++++++ 5 files changed, 14 insertions(+) diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h index 20242402fc11..0ca28cc8e3fa 100644 --- a/arch/riscv/include/asm/pgtable.h +++ b/arch/riscv/include/asm/pgtable.h @@ -646,6 +646,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd) #define __pud_to_phys(pud) (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT) +#define pud_pfn pud_pfn static inline unsigned long pud_pfn(pud_t pud) { return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT); diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index 1a71cb19c089..6cbbe473f680 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pud_t pud) return (unsigned long)__va(pud_val(pud) & origin_mask); } +#define pud_pfn pud_pfn static inline unsigned long pud_pfn(pud_t pud) { return __pa(pud_deref(pud)) >> PAGE_SHIFT; diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h index 4d1bafaba942..26efc9bb644a 100644 --- a/arch/sparc/include/asm/pgtable_64.h +++ b/arch/sparc/include/asm/pgtable_64.h @@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud) return pte_val(pte) & _PAGE_PMD_HUGE; } +#define pud_pfn pud_pfn static inline unsigned long pud_pfn(pud_t pud) { pte_t pte = __pte(pud_val(pud)); diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index cefc7a84f7a4..273f7557218c 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd) return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT; } +#define pud_pfn pud_pfn static inline unsigned long pud_pfn(pud_t pud) { phys_addr_t pfn = pud_val(pud); diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 600e17d03659..75fe309a4e10 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -1817,6 +1817,16 @@ 
@@ -1817,6 +1817,16 @@ typedef unsigned int pgtbl_mod_mask;
 #define pte_leaf_size(x) PAGE_SIZE
 #endif
 
+/*
+ * We always define pmd_pfn for all archs as it's used in lots of generic
+ * code.  Now it happens too for pud_pfn (and can happen for larger
+ * mappings too in the future; we're not there yet).  Instead of defining
+ * it for all archs (like pmd_pfn), provide a fallback.
+ */
+#ifndef pud_pfn
+#define pud_pfn(x)	({ BUILD_BUG(); 0; })
+#endif
+
 /*
  * Some architectures have MMUs that are configurable or selectable at boot
  * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
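For illustration, the fallback relies on callers only reaching pud_pfn()
behind a check the compiler can eliminate; the helper below is
hypothetical, sketching that pattern:

/* Hypothetical generic-mm snippet; names are illustrative only. */
static unsigned long leaf_pfn_of(pud_t pud)
{
	/*
	 * On archs without pud-level leaves, pud_leaf() is constant
	 * false, this branch is dead code, and the BUILD_BUG() in the
	 * pud_pfn() fallback never triggers.
	 */
	if (pud_leaf(pud))
		return pud_pfn(pud);
	return 0;
}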
From patchwork Wed Mar 27 15:23:25 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606827
Shutemov" , Mike Kravetz , John Hubbard , Michael Ellerman , peterx@redhat.com, Andrew Jones , Muchun Song , linux-riscv@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, Christophe Leroy , Andrew Morton , Christoph Hellwig , Lorenzo Stoakes , Matthew Wilcox , Rik van Riel , linux-arm-kernel@lists.infradead.org, Andrea Arcangeli , David Hildenbrand , "Aneesh Kumar K . V" , Vlastimil Babka , James Houghton , Jason Gunthorpe , Mike Rapoport , Axel Rasmussen Subject: [PATCH v4 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing Date: Wed, 27 Mar 2024 11:23:25 -0400 Message-ID: <20240327152332.950956-7-peterx@redhat.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240327152332.950956-1-peterx@redhat.com> References: <20240327152332.950956-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240327_152353_789962_4905B9C6 X-CRM114-Status: GOOD ( 17.05 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Peter Xu Hugepd format for GUP is only used in PowerPC with hugetlbfs. There are some kernel usage of hugepd (can refer to hugepd_populate_kernel() for PPC_8XX), however those pages are not candidates for GUP. Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to file-backed mappings") added a check to fail gup-fast if there's potential risk of violating GUP over writeback file systems. That should never apply to hugepd. Considering that hugepd is an old format (and even software-only), there's no plan to extend hugepd into other file typed memories that is prone to the same issue. Drop that check, not only because it'll never be true for hugepd per any known plan, but also it paves way for reusing the function outside fast-gup. To make sure we'll still remember this issue just in case hugepd will be extended to support non-hugetlbfs memories, add a rich comment above gup_huge_pd(), explaining the issue with proper references. Cc: Christoph Hellwig Cc: Lorenzo Stoakes Cc: Michael Ellerman Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Peter Xu --- mm/gup.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index e7510b6ce765..db35b056fc9a 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -2832,11 +2832,6 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr, return 0; } - if (!folio_fast_pin_allowed(folio, flags)) { - gup_put_folio(folio, refs, flags); - return 0; - } - if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) { gup_put_folio(folio, refs, flags); return 0; @@ -2847,6 +2842,14 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr, return 1; } +/* + * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file + * systems on Power, which does not have issue with folio writeback against + * GUP updates. When hugepd will be extended to support non-hugetlbfs or + * even anonymous memory, we need to do extra check as what we do with most + * of the other folios. See writable_file_mapping_allowed() and + * folio_fast_pin_allowed() for more information. 
+ */
 static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 		unsigned int pdshift, unsigned long end, unsigned int flags,
 		struct page **pages, int *nr)
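For context, the dropped check guards long-term pins against file-backed
folios that may be under writeback; hugetlbfs folios are never written
back that way, which is why hugepd (hugetlbfs-only) can skip it. Below
is a sketch of that rationale, not the kernel's actual implementation:

/* Sketch only: why hugetlb folios are exempt from the writeback check. */
static bool longterm_pin_is_safe(struct folio *folio, unsigned int flags)
{
	if (!(flags & FOLL_LONGTERM))
		return true;
	if (folio_test_hugetlb(folio))
		return true;		/* no dirty/writeback races */
	return !folio_mapping(folio);	/* anon or unmapped is fine */
}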
From patchwork Wed Mar 27 15:23:26 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606826

From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 07/13] mm/gup: Refactor record_subpages() to find 1st small page
Date: Wed, 27 Mar 2024 11:23:26 -0400
Message-ID: <20240327152332.950956-8-peterx@redhat.com>

From: Peter Xu

All the fast-gup functions take a tail page to operate on, and always
need to do page mask calculations before feeding that into
record_subpages(). Merge that logic into record_subpages(), so that it
does the nth_page() calculation itself.
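Concretely, with sz == PMD_SIZE (2M on x86) and addr pointing 1M into
the leaf, the new code locates the first small page itself:

/* (addr & (sz - 1)) >> PAGE_SHIFT == 0x100000 >> 12 == 256 */
start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);

so pages[0] becomes head page + 256, and callers no longer need to
pre-compute the tail page before the call.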
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 mm/gup.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index db35b056fc9a..c2881772216b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2779,13 +2779,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
-static int record_subpages(struct page *page, unsigned long addr,
-			   unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+			   unsigned long addr, unsigned long end,
+			   struct page **pages)
 {
+	struct page *start_page;
 	int nr;
 
+	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
 	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-		pages[nr] = nth_page(page, nr);
+		pages[nr] = nth_page(start_page, nr);
 
 	return nr;
 }
@@ -2820,8 +2823,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	/* hugepages are never "special" */
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-	page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pte_page(pte);
+	refs = record_subpages(page, sz, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2894,8 +2897,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pmd_page(orig);
+	refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2938,8 +2941,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pud_page(orig);
+	refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2978,8 +2981,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 
 	BUILD_BUG_ON(pgd_devmap(orig));
 
-	page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pgd_page(orig);
+	refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)

From patchwork Wed Mar 27 15:23:27 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606829
From patchwork Wed Mar 27 15:23:27 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606829
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 08/13] mm/gup: Handle hugetlb for no_page_table()
Date: Wed, 27 Mar 2024 11:23:27 -0400
Message-ID: <20240327152332.950956-9-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>
References: <20240327152332.950956-1-peterx@redhat.com>

From: Peter Xu

no_page_table() is not yet used by hugetlb code paths; prepare it for
that.  The major difference is that hugetlb will return -EFAULT as long
as the page cache does not exist, even if VM_SHARED; see
hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.

Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 mm/gup.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index c2881772216b..ef46a7053e16 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-				  unsigned int flags)
+				  unsigned int flags, unsigned long address)
 {
+	if (!(flags & FOLL_DUMP))
+		return NULL;
+
 	/*
-	 * When core dumping an enormous anonymous area that nobody
-	 * has touched so far, we don't want to allocate unnecessary pages or
+	 * When core dumping, we don't want to allocate unnecessary pages or
 	 * page tables. Return error instead of NULL to skip handle_mm_fault,
 	 * then get_dump_page() will return NULL to leave a hole in the dump.
 	 * But we can only make this optimization where a hole would surely
 	 * be zero-filled if handle_mm_fault() actually did handle it.
	 */
-	if ((flags & FOLL_DUMP) &&
-	    (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		if (!hugetlbfs_pagecache_present(h, vma, address))
+			return ERR_PTR(-EFAULT);
+	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
 		return ERR_PTR(-EFAULT);
+	}
+
 	return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	pte = ptep_get(ptep);
 	if (!pte_present(pte))
 		goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
 		return NULL;
-	return no_page_table(vma, flags);
+	return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	pmd = pmd_offset(pudp, address);
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmd_none(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (likely(!pmd_trans_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_present(*pmd))) {
 		spin_unlock(ptl);
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pud = pud_offset(p4dp, address);
 	if (pud_none(*pud))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
 		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(pud_bad(*pud)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -777,10 +785,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 	p4dp = p4d_offset(pgdp, address);
 	p4d = READ_ONCE(*p4dp);
 	if (!p4d_present(p4d))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_leaf(p4d));
 	if (unlikely(p4d_bad(p4d)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4dp, flags, ctx);
 }
@@ -830,7 +838,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
 	pgd = pgd_offset(mm, address);
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
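The dump-hole decision this patch reshuffles can be compressed into a
small model.  The sketch below is plain userspace C with hypothetical
stand-in predicates (struct vma_model is not a kernel structure); it only
mirrors the decision order of the rewritten no_page_table(): FOLL_DUMP
callers get -EFAULT where a hole is surely zero-filled, and hugetlb keys
that off page cache presence:

#include <stdbool.h>
#include <stdio.h>

#define FOLL_DUMP	0x01
#define EFAULT		14

/* Hypothetical stand-ins for is_vm_hugetlb_page(), vma_is_anonymous(),
 * hugetlbfs_pagecache_present() and the vm_ops->fault check. */
struct vma_model {
	bool hugetlb;
	bool anonymous;
	bool has_fault_handler;
	bool hugetlb_pagecache_present;
};

/* Mirrors the new no_page_table() flow: only FOLL_DUMP callers can get
 * -EFAULT, and hugetlb keys the decision off page cache presence. */
static int no_page_table_model(const struct vma_model *vma, unsigned int flags)
{
	if (!(flags & FOLL_DUMP))
		return 0;			/* NULL: let the caller fault */
	if (vma->hugetlb)
		return vma->hugetlb_pagecache_present ? 0 : -EFAULT;
	if (vma->anonymous || !vma->has_fault_handler)
		return -EFAULT;			/* hole is surely zero-filled */
	return 0;
}

int main(void)
{
	struct vma_model shared_hugetlb = { .hugetlb = true,
					    .hugetlb_pagecache_present = false };
	/* Even a VM_SHARED hugetlb vma reports -EFAULT without page cache. */
	printf("%d\n", no_page_table_model(&shared_hugetlb, FOLL_DUMP));
	return 0;
}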
From patchwork Wed Mar 27 15:23:28 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606830
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
V" , Vlastimil Babka , James Houghton , Jason Gunthorpe , Mike Rapoport , Axel Rasmussen Subject: [PATCH v4 09/13] mm/gup: Cache *pudp in follow_pud_mask() Date: Wed, 27 Mar 2024 11:23:28 -0400 Message-ID: <20240327152332.950956-10-peterx@redhat.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240327152332.950956-1-peterx@redhat.com> References: <20240327152332.950956-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240327_152359_455521_3E264312 X-CRM114-Status: GOOD ( 14.30 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Peter Xu Introduce "pud_t pud" in the function, so the code won't dereference *pudp multiple time. Not only because that looks less straightforward, but also because if the dereference really happened, it's not clear whether there can be race to see different *pudp values if it's being modified at the same time. Acked-by: James Houghton Reviewed-by: Jason Gunthorpe Signed-off-by: Peter Xu --- mm/gup.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index ef46a7053e16..26b8cca24077 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -753,26 +753,27 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma, unsigned int flags, struct follow_page_context *ctx) { - pud_t *pud; + pud_t *pudp, pud; spinlock_t *ptl; struct page *page; struct mm_struct *mm = vma->vm_mm; - pud = pud_offset(p4dp, address); - if (pud_none(*pud)) + pudp = pud_offset(p4dp, address); + pud = READ_ONCE(*pudp); + if (pud_none(pud)) return no_page_table(vma, flags, address); - if (pud_devmap(*pud)) { - ptl = pud_lock(mm, pud); - page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap); + if (pud_devmap(pud)) { + ptl = pud_lock(mm, pudp); + page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap); spin_unlock(ptl); if (page) return page; return no_page_table(vma, flags, address); } - if (unlikely(pud_bad(*pud))) + if (unlikely(pud_bad(pud))) return no_page_table(vma, flags, address); - return follow_pmd_mask(vma, address, pud, flags, ctx); + return follow_pmd_mask(vma, address, pudp, flags, ctx); } static struct page *follow_p4d_mask(struct vm_area_struct *vma, From patchwork Wed Mar 27 15:23:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 13606831 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4CFDAC54E67 for ; Wed, 27 Mar 2024 15:26:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: 
From patchwork Wed Mar 27 15:23:29 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606831
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 10/13] mm/gup: Handle huge pud for follow_pud_mask()
Date: Wed, 27 Mar 2024 11:23:29 -0400
Message-ID: <20240327152332.950956-11-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>
References: <20240327152332.950956-1-peterx@redhat.com>

From: Peter Xu

Teach follow_pud_mask() to also handle normal PUD pages like hugetlb.

Rename follow_devmap_pud() to follow_huge_pud() so that it can process
either huge devmap or hugetlb.  Move it out of TRANSPARENT_HUGEPAGE_PUD
and huge_memory.c (which relies on CONFIG_THP).  Switch to pud_leaf() to
detect both cases in the slow gup.

In the new follow_huge_pud(), take care of possible CoR for hugetlb if
necessary.
touch_pud() needs to be moved out of huge_memory.c to be accessible from
gup.c even if !THP.

While at it, optimize the non-present check by adding a pud_present()
early check before taking the pgtable lock, failing follow_page() early
if the PUD is not present: that is required by both devmap and hugetlb.
Use pud_huge() to also cover the pud_devmap() case.

One more trivial thing to mention: introduce "pud_t pud" in the code
paths along the way, so the code doesn't dereference *pudp multiple
times.  Not only does the repeated dereference read less
straightforwardly, but if it really happened it is also unclear whether
there can be a race that observes different *pudp values while the entry
is being modified at the same time.

Set ctx->page_mask properly for a PUD entry.  As a side effect, this
patch should also be able to optimize devmap GUP on PUD to jump over the
whole PUD range, though that is not yet verified.  Hugetlb already can
do so prior to this patch.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 include/linux/huge_mm.h |  8 -----
 mm/gup.c                | 70 +++++++++++++++++++++++++++++++++++++++--
 mm/huge_memory.c        | 47 ++-------------------------
 mm/internal.h           |  2 ++
 4 files changed, 71 insertions(+), 56 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index d3bb25c39482..3f36511bdc02 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -351,8 +351,6 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags, struct dev_pagemap **pgmap);
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
@@ -507,12 +505,6 @@ static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
 	return NULL;
 }
 
-static inline struct page *follow_devmap_pud(struct vm_area_struct *vma,
-	unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-	return NULL;
-}
-
 static inline bool thp_migration_supported(void)
 {
 	return false;
diff --git a/mm/gup.c b/mm/gup.c
index 26b8cca24077..1e5d42211bb4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 	return NULL;
 }
 
+#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+				    unsigned long addr, pud_t *pudp,
+				    int flags, struct follow_page_context *ctx)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	struct page *page;
+	pud_t pud = *pudp;
+	unsigned long pfn = pud_pfn(pud);
+	int ret;
+
+	assert_spin_locked(pud_lockptr(mm, pudp));
+
+	if ((flags & FOLL_WRITE) && !pud_write(pud))
+		return NULL;
+
+	if (!pud_present(pud))
+		return NULL;
+
+	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
+
+	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
+	    pud_devmap(pud)) {
+		/*
+		 * device mapped pages can only be returned if the caller
+		 * will manage the page reference count.
+		 *
+		 * At least one of FOLL_GET | FOLL_PIN must be set, so
+		 * assert that here:
+		 */
+		if (!(flags & (FOLL_GET | FOLL_PIN)))
+			return ERR_PTR(-EEXIST);
+
+		if (flags & FOLL_TOUCH)
+			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
+
+		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
+		if (!ctx->pgmap)
+			return ERR_PTR(-EFAULT);
+	}
+
+	page = pfn_to_page(pfn);
+
+	if (!pud_devmap(pud) && !pud_write(pud) &&
+	    gup_must_unshare(vma, flags, page))
+		return ERR_PTR(-EMLINK);
+
+	ret = try_grab_page(page, flags);
+	if (ret)
+		page = ERR_PTR(ret);
+	else
+		ctx->page_mask = HPAGE_PUD_NR - 1;
+
+	return page;
+}
+#else	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+				    unsigned long addr, pud_t *pudp,
+				    int flags, struct follow_page_context *ctx)
+{
+	return NULL;
+}
+#endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 		pte_t *pte, unsigned int flags)
 {
@@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pudp = pud_offset(p4dp, address);
 	pud = READ_ONCE(*pudp);
-	if (pud_none(pud))
+	if (!pud_present(pud))
 		return no_page_table(vma, flags, address);
-	if (pud_devmap(pud)) {
+	if (pud_leaf(pud)) {
 		ptl = pud_lock(mm, pudp);
-		page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
+		page = follow_huge_pud(vma, address, pudp, flags, ctx);
 		spin_unlock(ptl);
 		if (page)
 			return page;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bc6fa82d9815..2979198d7b71 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1377,8 +1377,8 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
-		      pud_t *pud, bool write)
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+	       pud_t *pud, bool write)
 {
 	pud_t _pud;
 
@@ -1390,49 +1390,6 @@ static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 	update_mmu_cache_pud(vma, addr, pud);
 }
 
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-	unsigned long pfn = pud_pfn(*pud);
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int ret;
-
-	assert_spin_locked(pud_lockptr(mm, pud));
-
-	if (flags & FOLL_WRITE && !pud_write(*pud))
-		return NULL;
-
-	if (pud_present(*pud) && pud_devmap(*pud))
-		/* pass */;
-	else
-		return NULL;
-
-	if (flags & FOLL_TOUCH)
-		touch_pud(vma, addr, pud, flags & FOLL_WRITE);
-
-	/*
-	 * device mapped pages can only be returned if the
-	 * caller will manage the page reference count.
-	 *
-	 * At least one of FOLL_GET | FOLL_PIN must be set, so assert that here:
-	 */
-	if (!(flags & (FOLL_GET | FOLL_PIN)))
-		return ERR_PTR(-EEXIST);
-
-	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
-	*pgmap = get_dev_pagemap(pfn, *pgmap);
-	if (!*pgmap)
-		return ERR_PTR(-EFAULT);
-	page = pfn_to_page(pfn);
-
-	ret = try_grab_page(page, flags);
-	if (ret)
-		page = ERR_PTR(ret);
-
-	return page;
-}
-
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
 		  struct vm_area_struct *vma)
diff --git a/mm/internal.h b/mm/internal.h
index 6c8d3844b6a3..eee8c82740b5 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1111,6 +1111,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
 /*
  * mm/huge_memory.c
  */
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+	       pud_t *pud, bool write);
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 				   unsigned long addr, pmd_t *pmd,
 				   unsigned int flags);
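The pfn and page_mask math in follow_huge_pud() can be tried in
isolation.  A standalone sketch follows, assuming x86-64-style constants
(PUD_SHIFT = 30) and an arbitrary head pfn; it is illustrative only, not
the kernel implementation:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PUD_SHIFT	30			/* 1G leaf, x86-64 style */
#define PUD_MASK	(~((1UL << PUD_SHIFT) - 1))
#define HPAGE_PUD_NR	(1UL << (PUD_SHIFT - PAGE_SHIFT))

int main(void)
{
	unsigned long head_pfn = 0x100000;	/* pfn of the 1G leaf head */
	unsigned long addr = (3UL << PUD_SHIFT) + 0x12345000;

	/* Same math as follow_huge_pud(): offset within the leaf, in pages */
	unsigned long pfn = head_pfn + ((addr & ~PUD_MASK) >> PAGE_SHIFT);

	/* page_mask tells the caller how many subpages share this entry,
	 * letting GUP advance by the whole PUD range in one step. */
	unsigned long page_mask = HPAGE_PUD_NR - 1;

	printf("pfn=%#lx, subpages covered=%lu\n", pfn, page_mask + 1);
	return 0;
}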
From patchwork Wed Mar 27 15:23:30 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606833
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 11/13] mm/gup: Handle huge pmd for follow_pmd_mask()
Date: Wed, 27 Mar 2024 11:23:30 -0400
Message-ID: <20240327152332.950956-12-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>
References: <20240327152332.950956-1-peterx@redhat.com>

From: Peter Xu

Replace pmd_trans_huge() with pmd_leaf() to also cover pmd_huge()
wherever it is enabled.  FOLL_TOUCH and FOLL_SPLIT_PMD only apply to
THP, not hugetlb.

Since follow_trans_huge_pmd() can now process hugetlb pages, rename it
to follow_huge_pmd() to match what it does.  Move it into gup.c so it
does not depend on CONFIG_THP.

While at it, move the ctx->page_mask setup into follow_huge_pmd() and
only set it when the page is valid.  It was not a bug to set it before
even if GUP failed (page==NULL), because follow_page_mask() callers
always ignore page_mask in that case; but doing it this way makes the
code cleaner.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 mm/gup.c         | 107 ++++++++++++++++++++++++++++++++++++++++++----
 mm/huge_memory.c |  86 +-------------------------------------
 mm/internal.h    |   5 +--
 3 files changed, 105 insertions(+), 93 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1e5d42211bb4..a81184b01276 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -580,6 +580,93 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 	return page;
 }
 
+
+/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
+static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
+					struct vm_area_struct *vma,
+					unsigned int flags)
+{
+	/* If the pmd is writable, we can write to the page. */
+	if (pmd_write(pmd))
+		return true;
+
+	/* Maybe FOLL_FORCE is set to override it? */
+	if (!(flags & FOLL_FORCE))
+		return false;
+
+	/* But FOLL_FORCE has no effect on shared mappings */
+	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
+		return false;
+
+	/* ... or read-only private ones */
+	if (!(vma->vm_flags & VM_MAYWRITE))
+		return false;
+
+	/* ... or already writable ones that just need to take a write fault */
+	if (vma->vm_flags & VM_WRITE)
+		return false;
+
+	/*
+	 * See can_change_pte_writable(): we broke COW and could map the page
+	 * writable if we have an exclusive anonymous page ...
+	 */
+	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+		return false;
+
+	/* ... and a write-fault isn't required for other reasons.
+	 */
+	if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
+		return false;
+	return !userfaultfd_huge_pmd_wp(vma, pmd);
+}
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+				    unsigned long addr, pmd_t *pmd,
+				    unsigned int flags,
+				    struct follow_page_context *ctx)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t pmdval = *pmd;
+	struct page *page;
+	int ret;
+
+	assert_spin_locked(pmd_lockptr(mm, pmd));
+
+	page = pmd_page(pmdval);
+	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
+
+	if ((flags & FOLL_WRITE) &&
+	    !can_follow_write_pmd(pmdval, page, vma, flags))
+		return NULL;
+
+	/* Avoid dumping huge zero page */
+	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(pmdval))
+		return ERR_PTR(-EFAULT);
+
+	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
+		return NULL;
+
+	if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page))
+		return ERR_PTR(-EMLINK);
+
+	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
+		       !PageAnonExclusive(page), page);
+
+	ret = try_grab_page(page, flags);
+	if (ret)
+		return ERR_PTR(ret);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (pmd_trans_huge(pmdval) && (flags & FOLL_TOUCH))
+		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
+#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
+
+	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
+	ctx->page_mask = HPAGE_PMD_NR - 1;
+	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
+
+	return page;
+}
+
 #else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 static struct page *follow_huge_pud(struct vm_area_struct *vma,
 				    unsigned long addr, pud_t *pudp,
@@ -587,6 +674,14 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 {
 	return NULL;
 }
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+				    unsigned long addr, pmd_t *pmd,
+				    unsigned int flags,
+				    struct follow_page_context *ctx)
+{
+	return NULL;
+}
 #endif  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
@@ -784,31 +879,31 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 			return page;
 		return no_page_table(vma, flags, address);
 	}
-	if (likely(!pmd_trans_huge(pmdval)))
+	if (likely(!pmd_leaf(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
 		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
-	if (unlikely(!pmd_present(*pmd))) {
+	pmdval = *pmd;
+	if (unlikely(!pmd_present(pmdval))) {
 		spin_unlock(ptl);
 		return no_page_table(vma, flags, address);
 	}
-	if (unlikely(!pmd_trans_huge(*pmd))) {
+	if (unlikely(!pmd_leaf(pmdval))) {
 		spin_unlock(ptl);
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 	}
-	if (flags & FOLL_SPLIT_PMD) {
+	if (pmd_trans_huge(pmdval) && (flags & FOLL_SPLIT_PMD)) {
 		spin_unlock(ptl);
 		split_huge_pmd(vma, pmd, address);
 		/* If pmd was left empty, stuff a page table in there quickly */
 		return pte_alloc(mm, pmd) ? ERR_PTR(-ENOMEM) :
			follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 	}
 
-	page = follow_trans_huge_pmd(vma, address, pmd, flags);
+	page = follow_huge_pmd(vma, address, pmd, flags, ctx);
 	spin_unlock(ptl);
-	ctx->page_mask = HPAGE_PMD_NR - 1;
 	return page;
 }
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2979198d7b71..ed0d82c4b829 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1220,8 +1220,8 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
-static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
-		      pmd_t *pmd, bool write)
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	       pmd_t *pmd, bool write)
 {
 	pmd_t _pmd;
 
@@ -1576,88 +1576,6 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
 	return pmd_dirty(pmd);
 }
 
-/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
-static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
-					struct vm_area_struct *vma,
-					unsigned int flags)
-{
-	/* If the pmd is writable, we can write to the page. */
-	if (pmd_write(pmd))
-		return true;
-
-	/* Maybe FOLL_FORCE is set to override it? */
-	if (!(flags & FOLL_FORCE))
-		return false;
-
-	/* But FOLL_FORCE has no effect on shared mappings */
-	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
-		return false;
-
-	/* ... or read-only private ones */
-	if (!(vma->vm_flags & VM_MAYWRITE))
-		return false;
-
-	/* ... or already writable ones that just need to take a write fault */
-	if (vma->vm_flags & VM_WRITE)
-		return false;
-
-	/*
-	 * See can_change_pte_writable(): we broke COW and could map the page
-	 * writable if we have an exclusive anonymous page ...
-	 */
-	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
-		return false;
-
-	/* ... and a write-fault isn't required for other reasons.
-	 */
-	if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
-		return false;
-	return !userfaultfd_huge_pmd_wp(vma, pmd);
-}
-
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-				   unsigned long addr,
-				   pmd_t *pmd,
-				   unsigned int flags)
-{
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int ret;
-
-	assert_spin_locked(pmd_lockptr(mm, pmd));
-
-	page = pmd_page(*pmd);
-	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
-
-	if ((flags & FOLL_WRITE) &&
-	    !can_follow_write_pmd(*pmd, page, vma, flags))
-		return NULL;
-
-	/* Avoid dumping huge zero page */
-	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
-		return ERR_PTR(-EFAULT);
-
-	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
-		return NULL;
-
-	if (!pmd_write(*pmd) && gup_must_unshare(vma, flags, page))
-		return ERR_PTR(-EMLINK);
-
-	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
-		       !PageAnonExclusive(page), page);
-
-	ret = try_grab_page(page, flags);
-	if (ret)
-		return ERR_PTR(ret);
-
-	if (flags & FOLL_TOUCH)
-		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
-
-	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
-	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
-
-	return page;
-}
-
 /* NUMA hinting page fault entry point for trans huge pmds */
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
diff --git a/mm/internal.h b/mm/internal.h
index eee8c82740b5..e10ecc6594f1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1113,9 +1113,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
  */
 void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 	       pud_t *pud, bool write);
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-				   unsigned long addr, pmd_t *pmd,
-				   unsigned int flags);
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	       pmd_t *pmd, bool write);
 
 /*
  * mm/mmap.c
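The can_follow_write_pmd() rules this patch moves are a small decision
table.  Here is a toy userspace rendering (flag values invented, and the
soft-dirty/uffd-wp tail checks omitted) of when FOLL_FORCE may write
through an unwritable PMD:

#include <stdbool.h>
#include <stdio.h>

#define FOLL_FORCE	0x01
#define VM_SHARED	0x01
#define VM_MAYSHARE	0x02
#define VM_WRITE	0x04
#define VM_MAYWRITE	0x08

/* Toy model of can_follow_write_pmd(): entry writable, or FOLL_FORCE on
 * a private, non-writable mapping holding an exclusive anonymous page. */
static bool can_force_write(bool pmd_writable, unsigned int vm_flags,
			    unsigned int foll_flags, bool anon_exclusive)
{
	if (pmd_writable)
		return true;
	if (!(foll_flags & FOLL_FORCE))
		return false;
	if (vm_flags & (VM_MAYSHARE | VM_SHARED))
		return false;		/* no FOLL_FORCE on shared mappings */
	if (!(vm_flags & VM_MAYWRITE))
		return false;		/* nor on read-only private ones */
	if (vm_flags & VM_WRITE)
		return false;		/* writable vmas should fault instead */
	return anon_exclusive;		/* broken-COW exclusive anon page only */
}

int main(void)
{
	/* Debugger-style write to a PROT_READ private mapping after COW: */
	printf("%d\n", can_force_write(false, VM_MAYWRITE, FOLL_FORCE, true));
	return 0;
}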
From patchwork Wed Mar 27 15:23:31 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606832
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 12/13] mm/gup: Handle hugepd for follow_page()
Date: Wed, 27 Mar 2024 11:23:31 -0400
Message-ID: <20240327152332.950956-13-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>
References: <20240327152332.950956-1-peterx@redhat.com>

From: Peter Xu

Hugepd is so far only used on PowerPC, on 4K page size kernels where the
hash MMU is used.  follow_page_mask() used to leverage the hugetlb APIs
to access hugepd entries; teach follow_page_mask() to handle hugepd
itself.

With the previous refactoring of the fast-gup gup_huge_pd(), most of the
code can be reused.  Some of it is not needed for follow_page(); for
example, gup_hugepte() tries to detect a pgtable entry change, which can
never happen with slow gup (which holds the pgtable lock), but it does
no harm to check.

Since follow_page() always fetches only one page, setting the end to
"address + PAGE_SIZE" should suffice.  We will still do the pgtable walk
once for each hugetlb page, by setting ctx->page_mask properly.

One thing worth mentioning is that some levels of the pgtable _bad()
helpers will report is_hugepd() entries as TRUE on Power8 hash MMUs.  I
think it at least applies to PUD on Power8 with 4K pgsize, meaning that
feeding a hugepd entry to pud_bad() will report a false positive.  Let's
leave that for now, because it can be arch-specific and I am a bit
reluctant to touch it.  In this patch it's not a problem, as long as
hugepd is detected before any bad pgtable entries.
To allow slow gup (e.g., follow_*_page()) to access the hugepd helpers,
the hugepd code is moved to the top of the file. Besides that, the helper
record_subpages() is now used by both the hugepd and fast-gup paths, so
unfortunately it must be wrapped in an #ifdef to avoid "unused function"
warnings.

Signed-off-by: Peter Xu
---
 mm/gup.c | 269 +++++++++++++++++++++++++++++++++----------------------
 1 file changed, 163 insertions(+), 106 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index a81184b01276..a02463c9420e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -500,6 +500,149 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 }
 
 #ifdef CONFIG_MMU
+
+#if defined(CONFIG_ARCH_HAS_HUGEPD) || defined(CONFIG_HAVE_FAST_GUP)
+static int record_subpages(struct page *page, unsigned long sz,
+			   unsigned long addr, unsigned long end,
+			   struct page **pages)
+{
+	struct page *start_page;
+	int nr;
+
+	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
+	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
+		pages[nr] = nth_page(start_page, nr);
+
+	return nr;
+}
+#endif	/* CONFIG_ARCH_HAS_HUGEPD || CONFIG_HAVE_FAST_GUP */
+
+#ifdef CONFIG_ARCH_HAS_HUGEPD
+static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
+				      unsigned long sz)
+{
+	unsigned long __boundary = (addr + sz) & ~(sz-1);
+	return (__boundary - 1 < end - 1) ? __boundary : end;
+}
+
+static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
+		       unsigned long end, unsigned int flags,
+		       struct page **pages, int *nr)
+{
+	unsigned long pte_end;
+	struct page *page;
+	struct folio *folio;
+	pte_t pte;
+	int refs;
+
+	pte_end = (addr + sz) & ~(sz-1);
+	if (pte_end < end)
+		end = pte_end;
+
+	pte = huge_ptep_get(ptep);
+
+	if (!pte_access_permitted(pte, flags & FOLL_WRITE))
+		return 0;
+
+	/* hugepages are never "special" */
+	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+
+	page = pte_page(pte);
+	refs = record_subpages(page, sz, addr, end, pages + *nr);
+
+	folio = try_grab_folio(page, refs, flags);
+	if (!folio)
+		return 0;
+
+	if (unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
+		gup_put_folio(folio, refs, flags);
+		return 0;
+	}
+
+	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
+		gup_put_folio(folio, refs, flags);
+		return 0;
+	}
+
+	*nr += refs;
+	folio_set_referenced(folio);
+	return 1;
+}
+
+/*
+ * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
+ * systems on Power, which does not have issue with folio writeback against
+ * GUP updates.  When hugepd will be extended to support non-hugetlbfs or
+ * even anonymous memory, we need to do extra check as what we do with most
+ * of the other folios.  See writable_file_mapping_allowed() and
+ * folio_fast_pin_allowed() for more information.
+ */
+static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
+		       unsigned int pdshift, unsigned long end, unsigned int flags,
+		       struct page **pages, int *nr)
+{
+	pte_t *ptep;
+	unsigned long sz = 1UL << hugepd_shift(hugepd);
+	unsigned long next;
+
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	do {
+		next = hugepte_addr_end(addr, end, sz);
+		if (!gup_hugepte(ptep, sz, addr, end, flags, pages, nr))
+			return 0;
+	} while (ptep++, addr = next, addr != end);
+
+	return 1;
+}
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	struct page *page;
+	struct hstate *h;
+	spinlock_t *ptl;
+	int nr = 0, ret;
+	pte_t *ptep;
+
+	/* Only hugetlb supports hugepd */
+	if (WARN_ON_ONCE(!is_vm_hugetlb_page(vma)))
+		return ERR_PTR(-EFAULT);
+
+	h = hstate_vma(vma);
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
+	ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
+			  flags, &page, &nr);
+	spin_unlock(ptl);
+
+	if (ret) {
+		WARN_ON_ONCE(nr != 1);
+		ctx->page_mask = (1U << huge_page_order(h)) - 1;
+		return page;
+	}
+
+	return NULL;
+}
+#else /* CONFIG_ARCH_HAS_HUGEPD */
+static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
+			      unsigned int pdshift, unsigned long end, unsigned int flags,
+			      struct page **pages, int *nr)
+{
+	return 0;
+}
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	return NULL;
+}
+#endif /* CONFIG_ARCH_HAS_HUGEPD */
+
+
 static struct page *no_page_table(struct vm_area_struct *vma,
 		unsigned int flags, unsigned long address)
 {
@@ -871,6 +1014,9 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval)))))
+		return follow_hugepd(vma, __hugepd(pmd_val(pmdval)),
+				     address, PMD_SHIFT, flags, ctx);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
@@ -921,6 +1067,9 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 	pud = READ_ONCE(*pudp);
 	if (!pud_present(pud))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
+		return follow_hugepd(vma, __hugepd(pud_val(pud)),
+				     address, PUD_SHIFT, flags, ctx);
 	if (pud_leaf(pud)) {
 		ptl = pud_lock(mm, pudp);
 		page = follow_huge_pud(vma, address, pudp, flags, ctx);
@@ -944,10 +1093,13 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 
 	p4dp = p4d_offset(pgdp, address);
 	p4d = READ_ONCE(*p4dp);
-	if (!p4d_present(p4d))
-		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_leaf(p4d));
-	if (unlikely(p4d_bad(p4d)))
+
+	if (unlikely(is_hugepd(__hugepd(p4d_val(p4d)))))
+		return follow_hugepd(vma, __hugepd(p4d_val(p4d)),
+				     address, P4D_SHIFT, flags, ctx);
+
+	if (!p4d_present(p4d) || p4d_bad(p4d))
 		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4dp, flags, ctx);
@@ -997,10 +1149,15 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
 	pgd = pgd_offset(mm, address);
 
-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pgd_val(*pgd)))))
+		page = follow_hugepd(vma, __hugepd(pgd_val(*pgd)),
+				     address, PGDIR_SHIFT, flags, ctx);
+	else if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+		page = no_page_table(vma, flags, address);
+	else
+		page = follow_p4d_mask(vma, address, pgd, flags, ctx);
 
-	return follow_p4d_mask(vma, address, pgd, flags, ctx);
+	return page;
 }
 
 struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
@@ -2947,106 +3104,6 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
-static int record_subpages(struct page *page, unsigned long sz,
-			   unsigned long addr, unsigned long end,
-			   struct page **pages)
-{
-	struct page *start_page;
-	int nr;
-
-	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
-	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-		pages[nr] = nth_page(start_page, nr);
-
-	return nr;
-}
-
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
-				      unsigned long sz)
-{
-	unsigned long __boundary = (addr + sz) & ~(sz-1);
-	return (__boundary - 1 < end - 1) ? __boundary : end;
-}
-
-static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
-		       unsigned long end, unsigned int flags,
-		       struct page **pages, int *nr)
-{
-	unsigned long pte_end;
-	struct page *page;
-	struct folio *folio;
-	pte_t pte;
-	int refs;
-
-	pte_end = (addr + sz) & ~(sz-1);
-	if (pte_end < end)
-		end = pte_end;
-
-	pte = huge_ptep_get(ptep);
-
-	if (!pte_access_permitted(pte, flags & FOLL_WRITE))
-		return 0;
-
-	/* hugepages are never "special" */
-	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
-
-	page = pte_page(pte);
-	refs = record_subpages(page, sz, addr, end, pages + *nr);
-
-	folio = try_grab_folio(page, refs, flags);
-	if (!folio)
-		return 0;
-
-	if (unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
-	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
-	*nr += refs;
-	folio_set_referenced(folio);
-	return 1;
-}
-
-/*
- * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
- * systems on Power, which does not have issue with folio writeback against
- * GUP updates.  When hugepd will be extended to support non-hugetlbfs or
- * even anonymous memory, we need to do extra check as what we do with most
- * of the other folios.  See writable_file_mapping_allowed() and
- * folio_fast_pin_allowed() for more information.
- */
-static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
-		       unsigned int pdshift, unsigned long end, unsigned int flags,
-		       struct page **pages, int *nr)
-{
-	pte_t *ptep;
-	unsigned long sz = 1UL << hugepd_shift(hugepd);
-	unsigned long next;
-
-	ptep = hugepte_offset(hugepd, addr, pdshift);
-	do {
-		next = hugepte_addr_end(addr, end, sz);
-		if (!gup_hugepte(ptep, sz, addr, end, flags, pages, nr))
-			return 0;
-	} while (ptep++, addr = next, addr != end);
-
-	return 1;
-}
-#else
-static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
-			      unsigned int pdshift, unsigned long end, unsigned int flags,
-			      struct page **pages, int *nr)
-{
-	return 0;
-}
-#endif /* CONFIG_ARCH_HAS_HUGEPD */
-
 static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		unsigned long end, unsigned int flags,
 		struct page **pages, int *nr)
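A side note on the walk boundaries in the code above: hugepte_addr_end()
clamps each step to the next size-aligned boundary, or to the caller's
end, whichever comes first. The following standalone sketch reuses the
helper body verbatim from the patch; the 16M size and the addresses are
made-up values for illustration:

    #include <stdio.h>

    /* Copied from the patch above; standalone for demonstration. */
    static unsigned long hugepte_addr_end(unsigned long addr,
                                          unsigned long end,
                                          unsigned long sz)
    {
        unsigned long __boundary = (addr + sz) & ~(sz-1);
        return (__boundary - 1 < end - 1) ? __boundary : end;
    }

    int main(void)
    {
        unsigned long sz   = 16UL << 20;   /* assumed 16M hugepd leaves */
        unsigned long addr = 0x01400000;   /* deliberately unaligned */
        unsigned long end  = 0x03000000;

        /* Mirrors the do/while stepping in gup_huge_pd(). */
        while (addr != end) {
            unsigned long next = hugepte_addr_end(addr, end, sz);
            printf("entry covers [0x%lx, 0x%lx)\n", addr, next);
            addr = next;
        }
        return 0;
    }

For follow_page() the end is always "address + PAGE_SIZE", so the loop in
gup_huge_pd() executes exactly once per call.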
From patchwork Wed Mar 27 15:23:32 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606835
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Yang Shi, "Kirill A . Shutemov", Mike Kravetz, John Hubbard,
 Michael Ellerman, peterx@redhat.com, Andrew Jones, Muchun Song,
 linux-riscv@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 Christophe Leroy, Andrew Morton, Christoph Hellwig, Lorenzo Stoakes,
 Matthew Wilcox, Rik van Riel, linux-arm-kernel@lists.infradead.org,
 Andrea Arcangeli, David Hildenbrand, "Aneesh Kumar K . V",
 Vlastimil Babka, James Houghton, Jason Gunthorpe, Mike Rapoport,
 Axel Rasmussen
Subject: [PATCH v4 13/13] mm/gup: Handle hugetlb in the generic
 follow_page_mask code
Date: Wed, 27 Mar 2024 11:23:32 -0400
Message-ID: <20240327152332.950956-14-peterx@redhat.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>
References: <20240327152332.950956-1-peterx@redhat.com>

From: Peter Xu

Now follow_page() is ready to handle hugetlb pages in whatever form, on
all architectures. Switch to the generic code path.

Time to retire hugetlb_follow_page_mask(), following the earlier
retirement of follow_hugetlb_page() in 4849807114b8.

There may be a slight difference in how the loops run when processing
slow GUP over a large hugetlb range on archs that support
cont_pte/cont_pmd: with the patch applied, each iteration of
__get_user_pages() resolves one pgtable entry, rather than relying on the
hugetlb hstate size, where one hstate may cover multiple entries per
iteration. A quick performance test in an aarch64 VM on an M1 chip shows
a 15% degradation over a tight loop of slow gup after the switch.

That should not be a problem, because slow gup is not a hot path for GUP
in general: when a page is commonly present, fast gup will already have
succeeded, while when the page is indeed missing and requires a follow-up
page fault, the slow-gup degradation will probably be buried in the fault
path anyway. This also explains why slow gup for THP used to be very slow
before 57edfcfd3419 ("mm/gup: accelerate thp gup even for "pages !=
NULL"") landed; that commit was a side benefit rather than the outcome of
a performance analysis.

If the performance ever becomes a concern, we can consider handling
CONT_PTE in follow_page(). Until that is justified as necessary, keep
everything clean and simple.
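To put rough numbers on the loop-count difference, here is a trivial
sketch. The figures (arm64, 4K base pages, 64K cont-pte hugetlb pages,
pinning a 2M range with slow gup) are assumptions chosen for illustration
and are unrelated to the benchmark above:

    #include <stdio.h>

    int main(void)
    {
        const unsigned long huge_sz  = 64 * 1024; /* cont-pte hstate */
        const unsigned long entry_sz = 4 * 1024;  /* one pte entry */
        const unsigned long range    = 2UL << 20; /* 2M slow-gup range */

        /* Before: hugetlb_follow_page_mask() stepped per hstate. */
        printf("hstate-sized steps: %lu iterations\n", range / huge_sz);
        /* After: the generic walk resolves one pgtable entry per loop. */
        printf("per-entry steps:    %lu iterations\n", range / entry_sz);
        return 0;
    }

Each extra iteration repeats the pgtable walk from the pgd, which is
presumably where the measured overhead comes from.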
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 include/linux/hugetlb.h |  7 ----
 mm/gup.c                | 15 +++------
 mm/hugetlb.c            | 71 -----------------------------------------
 3 files changed, 5 insertions(+), 88 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 294c78b3549f..a546140f89cd 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -328,13 +328,6 @@ static inline void hugetlb_zap_end(
 {
 }
 
-static inline struct page *hugetlb_follow_page_mask(
-	struct vm_area_struct *vma, unsigned long address, unsigned int flags,
-	unsigned int *page_mask)
-{
-	BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
-}
-
 static inline int copy_hugetlb_page_range(struct mm_struct *dst,
 					  struct mm_struct *src,
 					  struct vm_area_struct *dst_vma,
diff --git a/mm/gup.c b/mm/gup.c
index a02463c9420e..c803d0b0f358 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1135,18 +1135,11 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 {
 	pgd_t *pgd;
 	struct mm_struct *mm = vma->vm_mm;
+	struct page *page;
 
-	ctx->page_mask = 0;
-
-	/*
-	 * Call hugetlb_follow_page_mask for hugetlb vmas as it will use
-	 * special hugetlb page table walking code.  This eliminates the
-	 * need to check for hugetlb entries in the general walking code.
-	 */
-	if (is_vm_hugetlb_page(vma))
-		return hugetlb_follow_page_mask(vma, address, flags,
-						&ctx->page_mask);
+	vma_pgtable_walk_begin(vma);
 
+	ctx->page_mask = 0;
 	pgd = pgd_offset(mm, address);
 
 	if (unlikely(is_hugepd(__hugepd(pgd_val(*pgd)))))
@@ -1157,6 +1150,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	else
 		page = follow_p4d_mask(vma, address, pgd, flags, ctx);
 
+	vma_pgtable_walk_end(vma);
+
 	return page;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 65b9c9a48fd2..cc79891a3597 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6870,77 +6870,6 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 }
 #endif /* CONFIG_USERFAULTFD */
 
-struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-				      unsigned long address, unsigned int flags,
-				      unsigned int *page_mask)
-{
-	struct hstate *h = hstate_vma(vma);
-	struct mm_struct *mm = vma->vm_mm;
-	unsigned long haddr = address & huge_page_mask(h);
-	struct page *page = NULL;
-	spinlock_t *ptl;
-	pte_t *pte, entry;
-	int ret;
-
-	hugetlb_vma_lock_read(vma);
-	pte = hugetlb_walk(vma, haddr, huge_page_size(h));
-	if (!pte)
-		goto out_unlock;
-
-	ptl = huge_pte_lock(h, mm, pte);
-	entry = huge_ptep_get(pte);
-	if (pte_present(entry)) {
-		page = pte_page(entry);
-
-		if (!huge_pte_write(entry)) {
-			if (flags & FOLL_WRITE) {
-				page = NULL;
-				goto out;
-			}
-
-			if (gup_must_unshare(vma, flags, page)) {
-				/* Tell the caller to do unsharing */
-				page = ERR_PTR(-EMLINK);
-				goto out;
-			}
-		}
-
-		page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
-
-		/*
-		 * Note that page may be a sub-page, and with vmemmap
-		 * optimizations the page struct may be read only.
-		 * try_grab_page() will increase the ref count on the
-		 * head page, so this will be OK.
-		 *
-		 * try_grab_page() should always be able to get the page here,
-		 * because we hold the ptl lock and have verified pte_present().
-		 */
-		ret = try_grab_page(page, flags);
-
-		if (WARN_ON_ONCE(ret)) {
-			page = ERR_PTR(ret);
-			goto out;
-		}
-
-		*page_mask = (1U << huge_page_order(h)) - 1;
-	}
-out:
-	spin_unlock(ptl);
-out_unlock:
-	hugetlb_vma_unlock_read(vma);
-
-	/*
-	 * Fixup retval for dump requests: if pagecache doesn't exist,
-	 * don't try to allocate a new page but just skip it.
-	 */
-	if (!page && (flags & FOLL_DUMP) &&
-	    !hugetlbfs_pagecache_present(h, vma, address))
-		page = ERR_PTR(-EFAULT);
-
-	return page;
-}
-
 long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end,
 		pgprot_t newprot, unsigned long cp_flags)
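For completeness, a caller-side sketch of slow gup after the switch.
follow_page(), the mmap locking calls, and FOLL_GET are the real kernel
APIs, while grab_one() is a hypothetical helper; hugetlb VMAs now take
the same path as everything else:

    /* Hypothetical caller: resolve one user address via slow gup.
     * After this series the same call covers hugetlb and regular VMAs. */
    static struct page *grab_one(struct vm_area_struct *vma,
                                 unsigned long addr)
    {
        struct page *page;

        mmap_read_lock(vma->vm_mm);              /* slow gup needs mmap lock */
        page = follow_page(vma, addr, FOLL_GET); /* may be NULL or ERR_PTR() */
        mmap_read_unlock(vma->vm_mm);

        if (IS_ERR_OR_NULL(page))
            return NULL;
        return page;                             /* caller does put_page() */
    }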