From patchwork Wed Mar 27 15:23:20 2024
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 01/13] mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
Date: Wed, 27 Mar 2024 11:23:20 -0400
Message-ID: <20240327152332.950956-2-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>

Introduce a config option that is selected whenever huge leaves can be
involved in the pgtable (THP or hugetlbfs).  It is useful for marking
any code that can process either hugetlb or THP pages at any level
higher than the pte level.
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/Kconfig | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index b924f4a5a3ef..497cdf4d8ebf 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -850,6 +850,12 @@ config READ_ONLY_THP_FOR_FS
 
 endif # TRANSPARENT_HUGEPAGE
 
+#
+# The architecture supports pgtable leaves that is larger than PAGE_SIZE
+#
+config PGTABLE_HAS_HUGE_LEAVES
+	def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE
+
 #
 # UP and nommu archs use km based percpu allocator
 #
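For illustration, a minimal sketch of how generic code might consume
the new option (the helper below is hypothetical, not part of the
series):

    #include <linux/kconfig.h>

    /*
     * Hypothetical helper, for illustration only: generic walkers can
     * ask whether a leaf above pte level is possible at all on this
     * config, instead of testing CONFIG_TRANSPARENT_HUGEPAGE and
     * CONFIG_HUGETLB_PAGE separately.
     */
    static inline bool pgtable_may_have_huge_leaves(void)
    {
            return IS_ENABLED(CONFIG_PGTABLE_HAS_HUGE_LEAVES);
    }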
From patchwork Wed Mar 27 15:23:21 2024
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 02/13] mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
Date: Wed, 27 Mar 2024 11:23:21 -0400
Message-ID: <20240327152332.950956-3-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>

It will be used outside hugetlb.c soon.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/hugetlb.h | 9 +++++++++
 mm/hugetlb.c            | 4 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d748628efc5e..294c78b3549f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -174,6 +174,9 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, pud_t *pud);
+bool hugetlbfs_pagecache_present(struct hstate *h,
+				 struct vm_area_struct *vma,
+				 unsigned long address);
 
 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
 
@@ -1228,6 +1231,12 @@ static inline void hugetlb_register_node(struct node *node)
 static inline void hugetlb_unregister_node(struct node *node)
 {
 }
+
+static inline bool hugetlbfs_pagecache_present(
+    struct hstate *h, struct vm_area_struct *vma, unsigned long address)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f9640a81226e..65b9c9a48fd2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6110,8 +6110,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 /*
  * Return whether there is a pagecache page to back given address within VMA.
  */
-static bool hugetlbfs_pagecache_present(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long address)
+bool hugetlbfs_pagecache_present(struct hstate *h,
+				 struct vm_area_struct *vma, unsigned long address)
 {
 	struct address_space *mapping = vma->vm_file->f_mapping;
 	pgoff_t idx = linear_page_index(vma, address);
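A sketch of the kind of caller this enables outside hugetlb.c (the
function below is hypothetical, for illustration only):

    /*
     * Hypothetical generic-mm caller.  Note the !CONFIG_HUGETLB_PAGE
     * stub added above returns false, so no #ifdef is needed here.
     */
    static bool backing_pagecache_present(struct vm_area_struct *vma,
                                          unsigned long address)
    {
            if (is_vm_hugetlb_page(vma))
                    return hugetlbfs_pagecache_present(hstate_vma(vma),
                                                       vma, address);
            /* Non-hugetlb handling elided in this sketch. */
            return false;
    }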
From patchwork Wed Mar 27 15:23:22 2024
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 03/13] mm: Make HPAGE_PXD_* macros even if !THP
Date: Wed, 27 Mar 2024 11:23:22 -0400
Message-ID: <20240327152332.950956-4-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>

These macros can be helpful when we plan to merge hugetlb code into
generic code.  Move them out, and define them as long as
PGTABLE_HAS_HUGE_LEAVES is selected, because there are systems that
only define HUGETLB_PAGE, not THP.

One note here is that HPAGE_PMD_SHIFT must be defined even if PMD_SHIFT
is not defined (e.g. the !CONFIG_MMU case); it (or other forms of it,
like HPAGE_PMD_NR) is already used in a lot of common code without
ifdef guards.  Use the old trick to keep such code compiling: only the
HPAGE_PXD_SHIFT definitions need to differ per config, and all the rest
of the macros are derived from them.

While at it, move HPAGE_PMD_NR / HPAGE_PMD_ORDER over together.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/huge_mm.h | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 7576025db55d..d3bb25c39482 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -64,9 +64,6 @@ ssize_t single_hugepage_flag_show(struct kobject *kobj,
 				  enum transparent_hugepage_flag flag);
 extern struct kobj_attribute shmem_enabled_attr;
 
-#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
-#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
From patchwork Wed Mar 27 15:23:23 2024
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 04/13] mm: Introduce vma_pgtable_walk_{begin|end}()
Date: Wed, 27 Mar 2024 11:23:23 -0400
Message-ID: <20240327152332.950956-5-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>

Introduce per-vma begin()/end() helpers for pgtable walks.  This is a
preparation for merging the hugetlb pgtable walkers with generic mm.

The helpers need to be called before and after a pgtable walk; they
will become necessary once the pgtable walker code supports hugetlb
pages.  It is a hook point for any type of VMA, but for now only
hugetlb uses it, to stabilize the pgtable pages against going away
(due to possible pmd unsharing).
Reviewed-by: Christoph Hellwig
Reviewed-by: Muchun Song
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm.h |  3 +++
 mm/memory.c        | 12 ++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index afe27ff3fa94..d8f78017d271 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4233,4 +4233,7 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }
 
+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 3d0c0cc33c57..27d173f9a521 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6438,3 +6438,15 @@ void ptlock_free(struct ptdesc *ptdesc)
 	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_unlock_read(vma);
+}
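A sketch of the intended call pattern (the walker below is
hypothetical, for illustration only):

    static void walk_one_vma(struct vm_area_struct *vma)
    {
            vma_pgtable_walk_begin(vma);
            /*
             * ... walk the page tables of [vm_start, vm_end) ...
             *
             * For hugetlb VMAs the begin() above takes the vma lock
             * for read, so shared pmd tables cannot be unshared (and
             * their pages freed) while we walk; for all other VMAs
             * both helpers are currently no-ops.
             */
            vma_pgtable_walk_end(vma);
    }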
From patchwork Wed Mar 27 15:23:24 2024
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 05/13] mm/arch: Provide pud_pfn() fallback
Date: Wed, 27 Mar 2024 11:23:24 -0400
Message-ID: <20240327152332.950956-6-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>

The comment in the code explains the reasons.  We took a different
approach compared to pmd_pfn() by providing a fallback function.

Another option would be to provide lower-level config options (compared
to HUGETLB_PAGE or THP) to identify which levels an arch can support
for such huge mappings.  However, that would be overkill.

Cc: Mike Rapoport (IBM)
Cc: Matthew Wilcox
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/riscv/include/asm/pgtable.h    |  1 +
 arch/s390/include/asm/pgtable.h     |  1 +
 arch/sparc/include/asm/pgtable_64.h |  1 +
 arch/x86/include/asm/pgtable.h      |  1 +
 include/linux/pgtable.h             | 10 ++++++++++
 5 files changed, 14 insertions(+)

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 20242402fc11..0ca28cc8e3fa 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -646,6 +646,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
 
 #define __pud_to_phys(pud)  (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 1a71cb19c089..6cbbe473f680 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pud_t pud)
 	return (unsigned long)__va(pud_val(pud) & origin_mask);
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	return __pa(pud_deref(pud)) >> PAGE_SHIFT;
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 4d1bafaba942..26efc9bb644a 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud)
 	return pte_val(pte) & _PAGE_PMD_HUGE;
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	pte_t pte = __pte(pud_val(pud));
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cefc7a84f7a4..273f7557218c 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
 	return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	phys_addr_t pfn = pud_val(pud);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 600e17d03659..75fe309a4e10 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1817,6 +1817,16 @@ typedef unsigned int pgtbl_mod_mask;
 #define pte_leaf_size(x) PAGE_SIZE
 #endif
 
+/*
+ * We always define pmd_pfn for all archs as it's used in lots of generic
+ * code.  Now it happens too for pud_pfn (and can happen for larger
+ * mappings too in the future; we're not there yet).  Instead of defining
+ * it for all archs (like pmd_pfn), provide a fallback.
+ */
+#ifndef pud_pfn
+#define pud_pfn(x)	({ BUILD_BUG(); 0; })
+#endif
+
 /*
  * Some architectures have MMUs that are configurable or selectable at boot
  * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
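The fallback keeps common code compiling on architectures that do not
define pud_pfn(), while BUILD_BUG() catches any reachable use.  A
sketch of a hypothetical caller (the config guard is an assumption for
illustration, not from the series):

    static unsigned long pud_leaf_pfn(pud_t pud)
    {
            /*
             * On configs where huge pud leaves are impossible this
             * branch is dead, so the compiler eliminates it and the
             * BUILD_BUG() fallback never fires.
             */
            if (!IS_ENABLED(CONFIG_PGTABLE_HAS_HUGE_LEAVES))
                    return 0;
            return pud_pfn(pud);
    }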
From patchwork Wed Mar 27 15:23:25 2024
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
Date: Wed, 27 Mar 2024 11:23:25 -0400
Message-ID: <20240327152332.950956-7-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>

The hugepd format for GUP is only used on PowerPC with hugetlbfs.
There is some kernel usage of hugepd (see hugepd_populate_kernel() for
PPC_8XX), but those pages are not candidates for GUP.

Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing
to file-backed mappings") added a check to fail gup-fast if there is a
potential risk of violating GUP over writeback file systems.  That
should never apply to hugepd.  Considering that hugepd is an old format
(and even software-only), there is no plan to extend hugepd to other
file-backed memory types that are prone to the same issue.

Drop that check: not only will it never be true for hugepd per any
known plan, but dropping it also paves the way for reusing the function
outside fast-gup.

To make sure we still remember this issue in case hugepd is ever
extended to support non-hugetlbfs memory, add a rich comment above
gup_huge_pd() explaining the issue, with proper references.

Cc: Christoph Hellwig
Cc: Lorenzo Stoakes
Cc: Michael Ellerman
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index e7510b6ce765..db35b056fc9a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2832,11 +2832,6 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 		return 0;
 	}
 
-	if (!folio_fast_pin_allowed(folio, flags)) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
 	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
 		gup_put_folio(folio, refs, flags);
 		return 0;
@@ -2847,6 +2842,14 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	return 1;
 }
 
+/*
+ * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
+ * systems on Power, which does not have issue with folio writeback against
+ * GUP updates.  When hugepd will be extended to support non-hugetlbfs or
+ * even anonymous memory, we need to do extra check as what we do with most
+ * of the other folios.  See writable_file_mapping_allowed() and
+ * folio_fast_pin_allowed() for more information.
+ */
 static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 		       unsigned int pdshift, unsigned long end,
 		       unsigned int flags, struct page **pages, int *nr)
From patchwork Wed Mar 27 15:23:26 2024
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 07/13] mm/gup: Refactor record_subpages() to find 1st small page
Date: Wed, 27 Mar 2024 11:23:26 -0400
Message-ID: <20240327152332.950956-8-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>

All the fast-gup functions take a tail page to operate on, and always
need to do page mask calculations before feeding it into
record_subpages().  Merge that logic into record_subpages(), so that it
does the nth_page() calculation itself.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index db35b056fc9a..c2881772216b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2779,13 +2779,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
-static int record_subpages(struct page *page, unsigned long addr,
-			   unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+			   unsigned long addr, unsigned long end,
+			   struct page **pages)
 {
+	struct page *start_page;
 	int nr;
 
+	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
 	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-		pages[nr] = nth_page(page, nr);
+		pages[nr] = nth_page(start_page, nr);
 
 	return nr;
 }
@@ -2820,8 +2823,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	/* hugepages are never "special" */
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-	page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pte_page(pte);
+	refs = record_subpages(page, sz, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2894,8 +2897,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pmd_page(orig);
+	refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2938,8 +2941,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 					  pages, nr);
 	}
 
-	page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pud_page(orig);
+	refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2978,8 +2981,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 
 	BUILD_BUG_ON(pgd_devmap(orig));
 
-	page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pgd_page(orig);
+	refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);
try_grab_folio(page, refs, flags); if (!folio) From patchwork Wed Mar 27 15:23:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 13606844 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6C0F1C47DD9 for ; Wed, 27 Mar 2024 15:25:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-ID:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=OLHRjXB+5qk+bG2Q3Do4C1vPT9ZwQ6HWYmlBCp3LybI=; b=pYNlrRHR/1I5II PDx1ipTo1ckFfDRaMWNH98fjwrS/WS+87E+9jLmCANvx8AHlaAJUxf6yZgctJFousdtcTluRE5R1h IcCtgb3VJWoO5yb7inf0Is/ej8C5jlPkhuO/u3FGIfcYtmAzij5qptlFqUaq+2HLeo0HQXAYKvqga xoFPceczn8ih4onywvkrmR8c5M75Sbkr/YqWuqpt836nSF0Sj4LVRCiiQRhDh4rL0F8o3SRMUm5rw 3sMv5Vj4wUHkCw5CD+NP6u+g19LOgdY5PSWdktN93wVlMO7y/yBj8KWZCPhp7KKqFIomU45l5BWMO VSH2YBHHbLpPOJBo6kSA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1rpV9A-00000009mAB-2dja; Wed, 27 Mar 2024 15:25:00 +0000 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1rpV88-00000009lQw-2B4c for linux-riscv@lists.infradead.org; Wed, 27 Mar 2024 15:24:02 +0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1711553035; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=2/UoYTg8lz4bAl/L4a1cbQCJuXO8WYRlAVPmeKHJK2E=; b=De2WPSLDQBlryuPjCl/FjvTSJQdZboyR9+Z0CJEllaeg7GCX/IxfdKPDsm1xQ6TWFSwe4A q577qKZm5SRfqW1bp4tJ0pDkwCWRoVE3teBMkenAQVJtVWUKCTUeL1nWFGmTBVFMP273dx 5/P8nZNjJV8wHbM/Xw7leV7MJJGuCII= Received: from mail-qk1-f199.google.com (mail-qk1-f199.google.com [209.85.222.199]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-416-r4chMAFtNpazx0-6XlQnKA-1; Wed, 27 Mar 2024 11:23:53 -0400 X-MC-Unique: r4chMAFtNpazx0-6XlQnKA-1 Received: by mail-qk1-f199.google.com with SMTP id af79cd13be357-78a5e62931cso61836485a.0 for ; Wed, 27 Mar 2024 08:23:53 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1711553033; x=1712157833; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=2/UoYTg8lz4bAl/L4a1cbQCJuXO8WYRlAVPmeKHJK2E=; b=AT401XH8VnQVkllTZXRqdfhUaD7wHhZDr4BtcvDV2XMcEbiNpSAniSqA0pglvYJLi7 H+vCJlAXFXNbVjIqW3x74txDZLjbxmxNPgmZSZAkHS8To1s45wXDDFrE/MblvjJYe4I5 YXlHpgpyWwzSjdkJtHCPnFxxMv4Kt+vlLsorlyxznslsq5HXcYVefAPEklaEguJbBbcp fauwz7onPIvXfuKnvf+J861jAhz/QMp4oXuLUU+d4yUkJDcZI0LRZfCsB0VnlYj5wNIH 
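A minimal userspace sketch (not kernel code) of the head-page arithmetic
that the new record_subpages() centralizes; PAGE_SHIFT and the 2 MiB leaf
size are assumed x86-64 values, and array indices stand in for struct
page pointers:

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)   /* 4 KiB, assumed */
#define PMD_SIZE   (1UL << 21)           /* 2 MiB huge leaf, assumed */

/* Mirror of the new record_subpages() logic, using indices for pages. */
static int record_subpage_indexes(unsigned long sz, unsigned long addr,
                                  unsigned long end, unsigned long *idx)
{
        /* First small page inside the huge leaf that covers addr. */
        unsigned long start = (addr & (sz - 1)) >> PAGE_SHIFT;
        int nr;

        for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
                idx[nr] = start + nr;
        return nr;
}

int main(void)
{
        unsigned long idx[8];
        /* GUP of 3 pages starting 16 KiB into a 2 MiB mapping. */
        unsigned long addr = 0x200000UL + 4 * PAGE_SIZE;
        int nr = record_subpage_indexes(PMD_SIZE, addr,
                                        addr + 3 * PAGE_SIZE, idx);

        for (int i = 0; i < nr; i++)
                printf("pages[%d] = huge page subpage %lu\n", i, idx[i]);
        return 0;   /* prints subpages 4, 5, 6 */
}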
From patchwork Wed Mar 27 15:23:27 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606844
From: peterx@redhat.com
Subject: [PATCH v4 08/13] mm/gup: Handle hugetlb for no_page_table()
Date: Wed, 27 Mar 2024 11:23:27 -0400
Message-ID: <20240327152332.950956-9-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>
References: <20240327152332.950956-1-peterx@redhat.com>

From: Peter Xu

no_page_table() is not yet used for hugetlb code paths; prepare it for
that.  The major difference here is that hugetlb will return -EFAULT as
long as the page cache entry does not exist, even if VM_SHARED.  See
hugetlb_follow_page_mask().

Pass "address" into no_page_table() too, as hugetlb will need it.
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 mm/gup.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index c2881772216b..ef46a7053e16 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-                                  unsigned int flags)
+                                  unsigned int flags, unsigned long address)
 {
+        if (!(flags & FOLL_DUMP))
+                return NULL;
+
         /*
-         * When core dumping an enormous anonymous area that nobody
-         * has touched so far, we don't want to allocate unnecessary pages or
+         * When core dumping, we don't want to allocate unnecessary pages or
         * page tables.  Return error instead of NULL to skip handle_mm_fault,
         * then get_dump_page() will return NULL to leave a hole in the dump.
         * But we can only make this optimization where a hole would surely
         * be zero-filled if handle_mm_fault() actually did handle it.
         */
-        if ((flags & FOLL_DUMP) &&
-            (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+        if (is_vm_hugetlb_page(vma)) {
+                struct hstate *h = hstate_vma(vma);
+
+                if (!hugetlbfs_pagecache_present(h, vma, address))
+                        return ERR_PTR(-EFAULT);
+        } else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
                return ERR_PTR(-EFAULT);
+        }
+
        return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
         ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
         if (!ptep)
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
         pte = ptep_get(ptep);
         if (!pte_present(pte))
                 goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
         pte_unmap_unlock(ptep, ptl);
         if (!pte_none(pte))
                 return NULL;
-        return no_page_table(vma, flags);
+        return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
         pmd = pmd_offset(pudp, address);
         pmdval = pmdp_get_lockless(pmd);
         if (pmd_none(pmdval))
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
         if (!pmd_present(pmdval))
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
         if (pmd_devmap(pmdval)) {
                 ptl = pmd_lock(mm, pmd);
                 page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
                 spin_unlock(ptl);
                 if (page)
                         return page;
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
         }
         if (likely(!pmd_trans_huge(pmdval)))
                 return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
         if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
 
         ptl = pmd_lock(mm, pmd);
         if (unlikely(!pmd_present(*pmd))) {
                 spin_unlock(ptl);
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
         }
         if (unlikely(!pmd_trans_huge(*pmd))) {
                 spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
         pud = pud_offset(p4dp, address);
         if (pud_none(*pud))
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
         if (pud_devmap(*pud)) {
                 ptl = pud_lock(mm, pud);
                 page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
                 spin_unlock(ptl);
                 if (page)
                         return page;
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
         }
         if (unlikely(pud_bad(*pud)))
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
 
         return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
@@ -777,10 +785,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
         p4dp = p4d_offset(pgdp, address);
         p4d = READ_ONCE(*p4dp);
         if (!p4d_present(p4d))
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
         BUILD_BUG_ON(p4d_leaf(p4d));
         if (unlikely(p4d_bad(p4d)))
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
 
         return follow_pud_mask(vma, address, p4dp, flags, ctx);
 }
@@ -830,7 +838,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
         pgd = pgd_offset(mm, address);
 
         if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-                return no_page_table(vma, flags);
+                return no_page_table(vma, flags, address);
 
         return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
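To make the new decision order explicit, here is a userspace sketch with
booleans standing in for the kernel predicates (is_vm_hugetlb_page(),
hugetlbfs_pagecache_present(), vma_is_anonymous()); the struct and
function names below are illustrative only, not kernel API:

#include <stdbool.h>
#include <stdio.h>

#define EFAULT 14   /* stand-in for the kernel's errno value */

struct fake_vma {
        bool is_hugetlb;
        bool is_anonymous;
        bool has_fault_handler;          /* vma->vm_ops->fault != NULL */
        bool hugetlb_pagecache_present;
};

/*
 * Sketch of the reworked no_page_table() decision: 0 means "return
 * NULL, let the caller fault"; -EFAULT means "leave a hole in the
 * core dump".  The FOLL_DUMP gate now comes first.
 */
static int no_page_table_outcome(const struct fake_vma *vma, bool foll_dump)
{
        if (!foll_dump)
                return 0;
        if (vma->is_hugetlb)
                return vma->hugetlb_pagecache_present ? 0 : -EFAULT;
        if (vma->is_anonymous || !vma->has_fault_handler)
                return -EFAULT;
        return 0;
}

int main(void)
{
        struct fake_vma shared_hugetlb = {
                .is_hugetlb = true,
                .hugetlb_pagecache_present = false,
        };
        /* Hugetlb with no page cache dumps a hole even if VM_SHARED. */
        printf("%d\n", no_page_table_outcome(&shared_hugetlb, true)); /* -14 */
        return 0;
}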
From patchwork Wed Mar 27 15:23:28 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606848
From: peterx@redhat.com
Shutemov" , Mike Kravetz , John Hubbard , Michael Ellerman , peterx@redhat.com, Andrew Jones , Muchun Song , linux-riscv@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, Christophe Leroy , Andrew Morton , Christoph Hellwig , Lorenzo Stoakes , Matthew Wilcox , Rik van Riel , linux-arm-kernel@lists.infradead.org, Andrea Arcangeli , David Hildenbrand , "Aneesh Kumar K . V" , Vlastimil Babka , James Houghton , Jason Gunthorpe , Mike Rapoport , Axel Rasmussen Subject: [PATCH v4 09/13] mm/gup: Cache *pudp in follow_pud_mask() Date: Wed, 27 Mar 2024 11:23:28 -0400 Message-ID: <20240327152332.950956-10-peterx@redhat.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240327152332.950956-1-peterx@redhat.com> References: <20240327152332.950956-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240327_152359_455896_5DF9DCB6 X-CRM114-Status: GOOD ( 12.78 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Peter Xu Introduce "pud_t pud" in the function, so the code won't dereference *pudp multiple time. Not only because that looks less straightforward, but also because if the dereference really happened, it's not clear whether there can be race to see different *pudp values if it's being modified at the same time. Acked-by: James Houghton Reviewed-by: Jason Gunthorpe Signed-off-by: Peter Xu --- mm/gup.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index ef46a7053e16..26b8cca24077 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -753,26 +753,27 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma, unsigned int flags, struct follow_page_context *ctx) { - pud_t *pud; + pud_t *pudp, pud; spinlock_t *ptl; struct page *page; struct mm_struct *mm = vma->vm_mm; - pud = pud_offset(p4dp, address); - if (pud_none(*pud)) + pudp = pud_offset(p4dp, address); + pud = READ_ONCE(*pudp); + if (pud_none(pud)) return no_page_table(vma, flags, address); - if (pud_devmap(*pud)) { - ptl = pud_lock(mm, pud); - page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap); + if (pud_devmap(pud)) { + ptl = pud_lock(mm, pudp); + page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap); spin_unlock(ptl); if (page) return page; return no_page_table(vma, flags, address); } - if (unlikely(pud_bad(*pud))) + if (unlikely(pud_bad(pud))) return no_page_table(vma, flags, address); - return follow_pmd_mask(vma, address, pud, flags, ctx); + return follow_pmd_mask(vma, address, pudp, flags, ctx); } static struct page *follow_p4d_mask(struct vm_area_struct *vma, From patchwork Wed Mar 27 15:23:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 13606849 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5D088C47DD9 for ; Wed, 27 Mar 2024 15:27:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; 
From patchwork Wed Mar 27 15:23:29 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606849
From: peterx@redhat.com
Subject: [PATCH v4 10/13] mm/gup: Handle huge pud for follow_pud_mask()
Date: Wed, 27 Mar 2024 11:23:29 -0400
Message-ID: <20240327152332.950956-11-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>
References: <20240327152332.950956-1-peterx@redhat.com>

From: Peter Xu

Teach follow_pud_mask() to be able to handle normal PUD pages like
hugetlb.

Rename follow_devmap_pud() to follow_huge_pud() so that it can process
either huge devmap or hugetlb.  Move it out of TRANSPARENT_HUGEPAGE_PUD
and huge_memory.c (which relies on CONFIG_THP).
Switch to pud_leaf() to detect both cases in the slow gup.

In the new follow_huge_pud(), take care of possible CoR for hugetlb if
necessary.  touch_pud() needs to be moved out of huge_memory.c to be
accessible from gup.c even if !THP.

While at it, optimize the non-present check by adding a pud_present()
early check before taking the pgtable lock, failing follow_page() early
if the PUD is not present: that is required by both devmap and hugetlb.
Use pud_leaf() to also cover the pud_devmap() case.

One more trivial thing to mention: introduce "pud_t pud" in the code
paths along the way, so the code doesn't dereference *pudp multiple
times.  Not only because that looks less straightforward, but also
because if the dereference really happened, it's not clear whether
there can be a race to see different *pudp values when it's being
modified at the same time.

Set ctx->page_mask properly for a PUD entry.  As a side effect, this
patch should also be able to optimize devmap GUP on PUD to jump over
the whole PUD range, but that is not yet verified.  Hugetlb can already
do so prior to this patch.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 include/linux/huge_mm.h |  8 -----
 mm/gup.c                | 70 +++++++++++++++++++++++++++++++++++++++--
 mm/huge_memory.c        | 47 ++-------------------------
 mm/internal.h           |  2 ++
 4 files changed, 71 insertions(+), 56 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index d3bb25c39482..3f36511bdc02 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -351,8 +351,6 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
                 pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-                pud_t *pud, int flags, struct dev_pagemap **pgmap);
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
@@ -507,12 +505,6 @@ static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
         return NULL;
 }
 
-static inline struct page *follow_devmap_pud(struct vm_area_struct *vma,
-        unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-        return NULL;
-}
-
 static inline bool thp_migration_supported(void)
 {
         return false;
diff --git a/mm/gup.c b/mm/gup.c
index 26b8cca24077..1e5d42211bb4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
         return NULL;
 }
 
+#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+                                    unsigned long addr, pud_t *pudp,
+                                    int flags, struct follow_page_context *ctx)
+{
+        struct mm_struct *mm = vma->vm_mm;
+        struct page *page;
+        pud_t pud = *pudp;
+        unsigned long pfn = pud_pfn(pud);
+        int ret;
+
+        assert_spin_locked(pud_lockptr(mm, pudp));
+
+        if ((flags & FOLL_WRITE) && !pud_write(pud))
+                return NULL;
+
+        if (!pud_present(pud))
+                return NULL;
+
+        pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
+
+        if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
+            pud_devmap(pud)) {
+                /*
+                 * device mapped pages can only be returned if the caller
+                 * will manage the page reference count.
+                 *
+                 * At least one of FOLL_GET | FOLL_PIN must be set, so
+                 * assert that here:
+                 */
+                if (!(flags & (FOLL_GET | FOLL_PIN)))
+                        return ERR_PTR(-EEXIST);
+
+                if (flags & FOLL_TOUCH)
+                        touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
+
+                ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
+                if (!ctx->pgmap)
+                        return ERR_PTR(-EFAULT);
+        }
+
+        page = pfn_to_page(pfn);
+
+        if (!pud_devmap(pud) && !pud_write(pud) &&
+            gup_must_unshare(vma, flags, page))
+                return ERR_PTR(-EMLINK);
+
+        ret = try_grab_page(page, flags);
+        if (ret)
+                page = ERR_PTR(ret);
+        else
+                ctx->page_mask = HPAGE_PUD_NR - 1;
+
+        return page;
+}
+#else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+                                    unsigned long addr, pud_t *pudp,
+                                    int flags, struct follow_page_context *ctx)
+{
+        return NULL;
+}
+#endif  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
                 pte_t *pte, unsigned int flags)
 {
@@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
         pudp = pud_offset(p4dp, address);
         pud = READ_ONCE(*pudp);
-        if (pud_none(pud))
+        if (!pud_present(pud))
                 return no_page_table(vma, flags, address);
-        if (pud_devmap(pud)) {
+        if (pud_leaf(pud)) {
                 ptl = pud_lock(mm, pudp);
-                page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
+                page = follow_huge_pud(vma, address, pudp, flags, ctx);
                 spin_unlock(ptl);
                 if (page)
                         return page;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bc6fa82d9815..2979198d7b71 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1377,8 +1377,8 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
-                      pud_t *pud, bool write)
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+               pud_t *pud, bool write)
 {
         pud_t _pud;
 
@@ -1390,49 +1390,6 @@ static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
         update_mmu_cache_pud(vma, addr, pud);
 }
 
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-                pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-        unsigned long pfn = pud_pfn(*pud);
-        struct mm_struct *mm = vma->vm_mm;
-        struct page *page;
-        int ret;
-
-        assert_spin_locked(pud_lockptr(mm, pud));
-
-        if (flags & FOLL_WRITE && !pud_write(*pud))
-                return NULL;
-
-        if (pud_present(*pud) && pud_devmap(*pud))
-                /* pass */;
-        else
-                return NULL;
-
-        if (flags & FOLL_TOUCH)
-                touch_pud(vma, addr, pud, flags & FOLL_WRITE);
-
-        /*
-         * device mapped pages can only be returned if the
-         * caller will manage the page reference count.
-         *
-         * At least one of FOLL_GET | FOLL_PIN must be set, so assert that here:
-         */
-        if (!(flags & (FOLL_GET | FOLL_PIN)))
-                return ERR_PTR(-EEXIST);
-
-        pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
-        *pgmap = get_dev_pagemap(pfn, *pgmap);
-        if (!*pgmap)
-                return ERR_PTR(-EFAULT);
-        page = pfn_to_page(pfn);
-
-        ret = try_grab_page(page, flags);
-        if (ret)
-                page = ERR_PTR(ret);
-
-        return page;
-}
-
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
                   pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
                   struct vm_area_struct *vma)
diff --git a/mm/internal.h b/mm/internal.h
index 6c8d3844b6a3..eee8c82740b5 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1111,6 +1111,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
 /*
  * mm/huge_memory.c
  */
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+               pud_t *pud, bool write);
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
                                    unsigned long addr, pmd_t *pmd,
                                    unsigned int flags);
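For context on what ctx->page_mask buys the caller: the GUP loop can
consume all remaining small pages of a huge leaf in one step instead of
walking them individually.  The sketch below is modeled on the
page_increm computation in __get_user_pages(); the 1 GiB PUD geometry is
an assumed x86-64 value and the program is userspace-only:

#include <stdio.h>

#define PAGE_SHIFT   12
#define PUD_SHIFT    30                        /* 1 GiB leaf, assumed */
#define HPAGE_PUD_NR (1UL << (PUD_SHIFT - PAGE_SHIFT))

int main(void)
{
        unsigned long page_mask = HPAGE_PUD_NR - 1;     /* 0x3ffff */
        unsigned long start = 0x40000000UL + (5UL << PAGE_SHIFT);

        /*
         * Pages remaining in this leaf from 'start' onward; the GUP
         * loop advances by this many pages rather than one at a time.
         */
        unsigned long page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);

        printf("one walk covers %lu pages, next walk at %#lx\n",
               page_increm, start + (page_increm << PAGE_SHIFT));
        return 0;
}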
From patchwork Wed Mar 27 15:23:30 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606845
From: peterx@redhat.com
Subject: [PATCH v4 11/13] mm/gup: Handle huge pmd for follow_pmd_mask()
Date: Wed, 27 Mar 2024 11:23:30 -0400
Message-ID: <20240327152332.950956-12-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>
References: <20240327152332.950956-1-peterx@redhat.com>

From: Peter Xu

Replace pmd_trans_huge() with pmd_leaf() to also cover pmd_huge() as
long as enabled.  FOLL_TOUCH and FOLL_SPLIT_PMD only apply to THP, not
hugetlb.

Since follow_trans_huge_pmd() can now process hugetlb pages, rename it
to follow_huge_pmd() to match what it does.  Move it into gup.c so it
does not depend on CONFIG_THP.

While at it, move the ctx->page_mask setup into follow_huge_pmd() and
only set it when the page is valid.  It was not a bug to set it before
even if GUP failed (page==NULL), because follow_page_mask() callers
always ignore page_mask in that case, but doing it this way makes the
code cleaner.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 mm/gup.c         | 107 ++++++++++++++++++++++++++++++++++++++++++++---
 mm/huge_memory.c |  86 +-------------------------------------
 mm/internal.h    |   5 +--
 3 files changed, 105 insertions(+), 93 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 1e5d42211bb4..a81184b01276 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -580,6 +580,93 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
         return page;
 }
 
+/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
+static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
+                                        struct vm_area_struct *vma,
+                                        unsigned int flags)
+{
+        /* If the pmd is writable, we can write to the page. */
+        if (pmd_write(pmd))
+                return true;
+
+        /* Maybe FOLL_FORCE is set to override it? */
+        if (!(flags & FOLL_FORCE))
+                return false;
+
+        /* But FOLL_FORCE has no effect on shared mappings */
+        if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
+                return false;
+
+        /* ... or read-only private ones */
+        if (!(vma->vm_flags & VM_MAYWRITE))
+                return false;
+
+        /* ... or already writable ones that just need to take a write fault */
+        if (vma->vm_flags & VM_WRITE)
+                return false;
+
+        /*
+         * See can_change_pte_writable(): we broke COW and could map the page
+         * writable if we have an exclusive anonymous page ...
+         */
+        if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+                return false;
+
+        /* ... and a write-fault isn't required for other reasons. */
+        if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
+                return false;
+        return !userfaultfd_huge_pmd_wp(vma, pmd);
+}
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+                                    unsigned long addr, pmd_t *pmd,
+                                    unsigned int flags,
+                                    struct follow_page_context *ctx)
+{
+        struct mm_struct *mm = vma->vm_mm;
+        pmd_t pmdval = *pmd;
+        struct page *page;
+        int ret;
+
+        assert_spin_locked(pmd_lockptr(mm, pmd));
+
+        page = pmd_page(pmdval);
+        VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
+
+        if ((flags & FOLL_WRITE) &&
+            !can_follow_write_pmd(pmdval, page, vma, flags))
+                return NULL;
+
+        /* Avoid dumping huge zero page */
+        if ((flags & FOLL_DUMP) && is_huge_zero_pmd(pmdval))
+                return ERR_PTR(-EFAULT);
+
+        if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
+                return NULL;
+
+        if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page))
+                return ERR_PTR(-EMLINK);
+
+        VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
+                        !PageAnonExclusive(page), page);
+
+        ret = try_grab_page(page, flags);
+        if (ret)
+                return ERR_PTR(ret);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+        if (pmd_trans_huge(pmdval) && (flags & FOLL_TOUCH))
+                touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
+#endif  /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+        page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
+        ctx->page_mask = HPAGE_PMD_NR - 1;
+        VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
+
+        return page;
+}
+
 #else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 static struct page *follow_huge_pud(struct vm_area_struct *vma,
                                     unsigned long addr, pud_t *pudp,
@@ -587,6 +674,14 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 {
         return NULL;
 }
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+                                    unsigned long addr, pmd_t *pmd,
+                                    unsigned int flags,
+                                    struct follow_page_context *ctx)
+{
+        return NULL;
+}
 #endif  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
@@ -784,31 +879,31 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
                         return page;
                 return no_page_table(vma, flags, address);
         }
-        if (likely(!pmd_trans_huge(pmdval)))
+        if (likely(!pmd_leaf(pmdval)))
                 return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
         if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
                 return no_page_table(vma, flags, address);
 
         ptl = pmd_lock(mm, pmd);
-        if (unlikely(!pmd_present(*pmd))) {
+        pmdval = *pmd;
+        if (unlikely(!pmd_present(pmdval))) {
                 spin_unlock(ptl);
                 return no_page_table(vma, flags, address);
         }
-        if (unlikely(!pmd_trans_huge(*pmd))) {
+        if (unlikely(!pmd_leaf(pmdval))) {
                 spin_unlock(ptl);
                 return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
         }
-        if (flags & FOLL_SPLIT_PMD) {
+        if (pmd_trans_huge(pmdval) && (flags & FOLL_SPLIT_PMD)) {
                 spin_unlock(ptl);
                 split_huge_pmd(vma, pmd, address);
                 /* If pmd was left empty, stuff a page table in there quickly */
                 return pte_alloc(mm, pmd) ? ERR_PTR(-ENOMEM) :
                         follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
         }
-        page = follow_trans_huge_pmd(vma, address, pmd, flags);
+        page = follow_huge_pmd(vma, address, pmd, flags, ctx);
         spin_unlock(ptl);
-        ctx->page_mask = HPAGE_PMD_NR - 1;
         return page;
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2979198d7b71..ed0d82c4b829 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1220,8 +1220,8 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
-static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
-                      pmd_t *pmd, bool write)
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+               pmd_t *pmd, bool write)
 {
         pmd_t _pmd;
 
@@ -1576,88 +1576,6 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
         return pmd_dirty(pmd);
 }
 
-/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
-static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
-                                        struct vm_area_struct *vma,
-                                        unsigned int flags)
-{
-        /* If the pmd is writable, we can write to the page. */
-        if (pmd_write(pmd))
-                return true;
-
-        /* Maybe FOLL_FORCE is set to override it? */
-        if (!(flags & FOLL_FORCE))
-                return false;
-
-        /* But FOLL_FORCE has no effect on shared mappings */
-        if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
-                return false;
-
-        /* ... or read-only private ones */
-        if (!(vma->vm_flags & VM_MAYWRITE))
-                return false;
-
-        /* ... or already writable ones that just need to take a write fault */
-        if (vma->vm_flags & VM_WRITE)
-                return false;
-
-        /*
-         * See can_change_pte_writable(): we broke COW and could map the page
-         * writable if we have an exclusive anonymous page ...
-         */
-        if (!page || !PageAnon(page) || !PageAnonExclusive(page))
-                return false;
-
-        /* ... and a write-fault isn't required for other reasons. */
-        if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
-                return false;
-        return !userfaultfd_huge_pmd_wp(vma, pmd);
-}
-
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-                                   unsigned long addr,
-                                   pmd_t *pmd,
-                                   unsigned int flags)
-{
-        struct mm_struct *mm = vma->vm_mm;
-        struct page *page;
-        int ret;
-
-        assert_spin_locked(pmd_lockptr(mm, pmd));
-
-        page = pmd_page(*pmd);
-        VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
-
-        if ((flags & FOLL_WRITE) &&
-            !can_follow_write_pmd(*pmd, page, vma, flags))
-                return NULL;
-
-        /* Avoid dumping huge zero page */
-        if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
-                return ERR_PTR(-EFAULT);
-
-        if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
-                return NULL;
-
-        if (!pmd_write(*pmd) && gup_must_unshare(vma, flags, page))
-                return ERR_PTR(-EMLINK);
-
-        VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
-                        !PageAnonExclusive(page), page);
-
-        ret = try_grab_page(page, flags);
-        if (ret)
-                return ERR_PTR(ret);
-
-        if (flags & FOLL_TOUCH)
-                touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
-
-        page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
-        VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
-
-        return page;
-}
-
 /* NUMA hinting page fault entry point for trans huge pmds */
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
diff --git a/mm/internal.h b/mm/internal.h
index eee8c82740b5..e10ecc6594f1 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1113,9 +1113,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
  */
 void touch_pud(struct vm_area_struct *vma, unsigned long addr,
                pud_t *pud, bool write);
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-                                   unsigned long addr, pmd_t *pmd,
-                                   unsigned int flags);
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+               pmd_t *pmd, bool write);
 
 /*
  * mm/mmap.c
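Because can_follow_write_pmd() is a chain of early-outs, its conditions
are easy to tabulate.  Below is a boolean condensation (the last two
kernel checks, soft-dirty and uffd-wp, are folded into one needs_wp_fault
flag); the struct and flag names are illustrative only, not kernel API:

#include <stdbool.h>
#include <stdio.h>

struct mapping {
        bool pmd_writable;
        bool foll_force;
        bool shared;          /* VM_MAYSHARE | VM_SHARED */
        bool may_write;       /* VM_MAYWRITE */
        bool vm_write;        /* VM_WRITE */
        bool anon_exclusive;  /* PageAnon && PageAnonExclusive */
        bool needs_wp_fault;  /* soft-dirty or uffd-wp pending */
};

/* Boolean sketch of can_follow_write_pmd(); not the kernel function. */
static bool can_follow_write(const struct mapping *m)
{
        if (m->pmd_writable)
                return true;                 /* writable pmd: always ok */
        if (!m->foll_force)
                return false;                /* no override requested */
        if (m->shared || !m->may_write || m->vm_write)
                return false;                /* FOLL_FORCE can't help here */
        if (!m->anon_exclusive)
                return false;                /* COW not exclusively broken */
        return !m->needs_wp_fault;           /* no other write-fault reason */
}

int main(void)
{
        /* e.g. a debugger poking a read-only private anon mapping */
        struct mapping dbg = { .foll_force = true, .may_write = true,
                               .anon_exclusive = true };
        printf("follow write: %s\n", can_follow_write(&dbg) ? "yes" : "no");
        return 0;
}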
From patchwork Wed Mar 27 15:23:31 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13606847
From: peterx@redhat.com
V" , Vlastimil Babka , James Houghton , Jason Gunthorpe , Mike Rapoport , Axel Rasmussen Subject: [PATCH v4 12/13] mm/gup: Handle hugepd for follow_page() Date: Wed, 27 Mar 2024 11:23:31 -0400 Message-ID: <20240327152332.950956-13-peterx@redhat.com> X-Mailer: git-send-email 2.44.0 In-Reply-To: <20240327152332.950956-1-peterx@redhat.com> References: <20240327152332.950956-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240327_082407_392832_8AC7C433 X-CRM114-Status: GOOD ( 25.82 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Peter Xu Hugepd is only used in PowerPC so far on 4K page size kernels where hash mmu is used. follow_page_mask() used to leverage hugetlb APIs to access hugepd entries. Teach follow_page_mask() itself on hugepd. With previous refactors on fast-gup gup_huge_pd(), most of the code can be leveraged. There's something not needed for follow page, for example, gup_hugepte() tries to detect pgtable entry change which will never happen with slow gup (which has the pgtable lock held), but that's not a problem to check. Since follow_page() always only fetch one page, set the end to "address + PAGE_SIZE" should suffice. We will still do the pgtable walk once for each hugetlb page by setting ctx->page_mask properly. One thing worth mentioning is that some level of pgtable's _bad() helper will report is_hugepd() entries as TRUE on Power8 hash MMUs. I think it at least applies to PUD on Power8 with 4K pgsize. It means feeding a hugepd entry to pud_bad() will report a false positive. Let's leave that for now because it can be arch-specific where I am a bit declined to touch. In this patch it's not a problem as long as hugepd is detected before any bad pgtable entries. To allow slow gup like follow_*_page() to access hugepd helpers, hugepd codes are moved to the top. Besides that, the helper record_subpages() will be used by either hugepd or fast-gup now. To avoid "unused function" warnings we must provide a "#ifdef" for it, unfortunately. Signed-off-by: Peter Xu --- mm/gup.c | 269 +++++++++++++++++++++++++++++++++---------------------- 1 file changed, 163 insertions(+), 106 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index a81184b01276..a02463c9420e 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -500,6 +500,149 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags) } #ifdef CONFIG_MMU + +#if defined(CONFIG_ARCH_HAS_HUGEPD) || defined(CONFIG_HAVE_FAST_GUP) +static int record_subpages(struct page *page, unsigned long sz, + unsigned long addr, unsigned long end, + struct page **pages) +{ + struct page *start_page; + int nr; + + start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT); + for (nr = 0; addr != end; nr++, addr += PAGE_SIZE) + pages[nr] = nth_page(start_page, nr); + + return nr; +} +#endif /* CONFIG_ARCH_HAS_HUGEPD || CONFIG_HAVE_FAST_GUP */ + +#ifdef CONFIG_ARCH_HAS_HUGEPD +static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end, + unsigned long sz) +{ + unsigned long __boundary = (addr + sz) & ~(sz-1); + return (__boundary - 1 < end - 1) ? 
 mm/gup.c | 269 +++++++++++++++++++++++++++++++++----------------------
 1 file changed, 163 insertions(+), 106 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index a81184b01276..a02463c9420e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -500,6 +500,149 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 }
 
 #ifdef CONFIG_MMU
+
+#if defined(CONFIG_ARCH_HAS_HUGEPD) || defined(CONFIG_HAVE_FAST_GUP)
+static int record_subpages(struct page *page, unsigned long sz,
+			   unsigned long addr, unsigned long end,
+			   struct page **pages)
+{
+	struct page *start_page;
+	int nr;
+
+	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
+	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
+		pages[nr] = nth_page(start_page, nr);
+
+	return nr;
+}
+#endif /* CONFIG_ARCH_HAS_HUGEPD || CONFIG_HAVE_FAST_GUP */
+
+#ifdef CONFIG_ARCH_HAS_HUGEPD
+static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
+				      unsigned long sz)
+{
+	unsigned long __boundary = (addr + sz) & ~(sz-1);
+	return (__boundary - 1 < end - 1) ? __boundary : end;
+}
+
+static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
+		       unsigned long end, unsigned int flags,
+		       struct page **pages, int *nr)
+{
+	unsigned long pte_end;
+	struct page *page;
+	struct folio *folio;
+	pte_t pte;
+	int refs;
+
+	pte_end = (addr + sz) & ~(sz-1);
+	if (pte_end < end)
+		end = pte_end;
+
+	pte = huge_ptep_get(ptep);
+
+	if (!pte_access_permitted(pte, flags & FOLL_WRITE))
+		return 0;
+
+	/* hugepages are never "special" */
+	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+
+	page = pte_page(pte);
+	refs = record_subpages(page, sz, addr, end, pages + *nr);
+
+	folio = try_grab_folio(page, refs, flags);
+	if (!folio)
+		return 0;
+
+	if (unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
+		gup_put_folio(folio, refs, flags);
+		return 0;
+	}
+
+	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
+		gup_put_folio(folio, refs, flags);
+		return 0;
+	}
+
+	*nr += refs;
+	folio_set_referenced(folio);
+	return 1;
+}
+
+/*
+ * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
+ * systems on Power, which does not have issue with folio writeback against
+ * GUP updates.  When hugepd will be extended to support non-hugetlbfs or
+ * even anonymous memory, we need to do extra check as what we do with most
+ * of the other folios.  See writable_file_mapping_allowed() and
+ * folio_fast_pin_allowed() for more information.
+ */
+static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
+		unsigned int pdshift, unsigned long end, unsigned int flags,
+		struct page **pages, int *nr)
+{
+	pte_t *ptep;
+	unsigned long sz = 1UL << hugepd_shift(hugepd);
+	unsigned long next;
+
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	do {
+		next = hugepte_addr_end(addr, end, sz);
+		if (!gup_hugepte(ptep, sz, addr, end, flags, pages, nr))
+			return 0;
+	} while (ptep++, addr = next, addr != end);
+
+	return 1;
+}
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	struct page *page;
+	struct hstate *h;
+	spinlock_t *ptl;
+	int nr = 0, ret;
+	pte_t *ptep;
+
+	/* Only hugetlb supports hugepd */
+	if (WARN_ON_ONCE(!is_vm_hugetlb_page(vma)))
+		return ERR_PTR(-EFAULT);
+
+	h = hstate_vma(vma);
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
+	ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
+			  flags, &page, &nr);
+	spin_unlock(ptl);
+
+	if (ret) {
+		WARN_ON_ONCE(nr != 1);
+		ctx->page_mask = (1U << huge_page_order(h)) - 1;
+		return page;
+	}
+
+	return NULL;
+}
+#else /* CONFIG_ARCH_HAS_HUGEPD */
+static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
+		unsigned int pdshift, unsigned long end, unsigned int flags,
+		struct page **pages, int *nr)
+{
+	return 0;
+}
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	return NULL;
+}
+#endif /* CONFIG_ARCH_HAS_HUGEPD */
+
+
 static struct page *no_page_table(struct vm_area_struct *vma,
 		unsigned int flags, unsigned long address)
 {
@@ -871,6 +1014,9 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval)))))
+		return follow_hugepd(vma, __hugepd(pmd_val(pmdval)),
+				     address, PMD_SHIFT, flags, ctx);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
@@ -921,6 +1067,9 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 	pud = READ_ONCE(*pudp);
 	if (!pud_present(pud))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
+		return follow_hugepd(vma, __hugepd(pud_val(pud)),
+				     address, PUD_SHIFT, flags, ctx);
 	if (pud_leaf(pud)) {
 		ptl = pud_lock(mm, pudp);
 		page = follow_huge_pud(vma, address, pudp, flags, ctx);
@@ -944,10 +1093,13 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 	p4dp = p4d_offset(pgdp, address);
 	p4d = READ_ONCE(*p4dp);
-	if (!p4d_present(p4d))
-		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_leaf(p4d));
-	if (unlikely(p4d_bad(p4d)))
+
+	if (unlikely(is_hugepd(__hugepd(p4d_val(p4d)))))
+		return follow_hugepd(vma, __hugepd(p4d_val(p4d)),
+				     address, P4D_SHIFT, flags, ctx);
+
+	if (!p4d_present(p4d) || p4d_bad(p4d))
 		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4dp, flags, ctx);
@@ -997,10 +1149,15 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
 	pgd = pgd_offset(mm, address);
 
-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pgd_val(*pgd)))))
+		page = follow_hugepd(vma, __hugepd(pgd_val(*pgd)),
+				     address, PGDIR_SHIFT, flags, ctx);
+	else if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+		page = no_page_table(vma, flags, address);
+	else
+		page = follow_p4d_mask(vma, address, pgd, flags, ctx);
 
-	return follow_p4d_mask(vma, address, pgd, flags, ctx);
+	return page;
 }
 
 struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
@@ -2947,106 +3104,6 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
-static int record_subpages(struct page *page, unsigned long sz,
-			   unsigned long addr, unsigned long end,
-			   struct page **pages)
-{
-	struct page *start_page;
-	int nr;
-
-	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
-	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-		pages[nr] = nth_page(start_page, nr);
-
-	return nr;
-}
-
-#ifdef CONFIG_ARCH_HAS_HUGEPD
-static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
-				      unsigned long sz)
-{
-	unsigned long __boundary = (addr + sz) & ~(sz-1);
-	return (__boundary - 1 < end - 1) ? __boundary : end;
-}
-
-static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
-		       unsigned long end, unsigned int flags,
-		       struct page **pages, int *nr)
-{
-	unsigned long pte_end;
-	struct page *page;
-	struct folio *folio;
-	pte_t pte;
-	int refs;
-
-	pte_end = (addr + sz) & ~(sz-1);
-	if (pte_end < end)
-		end = pte_end;
-
-	pte = huge_ptep_get(ptep);
-
-	if (!pte_access_permitted(pte, flags & FOLL_WRITE))
-		return 0;
-
-	/* hugepages are never "special" */
-	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
-
-	page = pte_page(pte);
-	refs = record_subpages(page, sz, addr, end, pages + *nr);
-
-	folio = try_grab_folio(page, refs, flags);
-	if (!folio)
-		return 0;
-
-	if (unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
-	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
-	*nr += refs;
-	folio_set_referenced(folio);
-	return 1;
-}
-
-/*
- * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
- * systems on Power, which does not have issue with folio writeback against
- * GUP updates.  When hugepd will be extended to support non-hugetlbfs or
- * even anonymous memory, we need to do extra check as what we do with most
- * of the other folios.  See writable_file_mapping_allowed() and
- * folio_fast_pin_allowed() for more information.
- */
-static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
-		unsigned int pdshift, unsigned long end, unsigned int flags,
-		struct page **pages, int *nr)
-{
-	pte_t *ptep;
-	unsigned long sz = 1UL << hugepd_shift(hugepd);
-	unsigned long next;
-
-	ptep = hugepte_offset(hugepd, addr, pdshift);
-	do {
-		next = hugepte_addr_end(addr, end, sz);
-		if (!gup_hugepte(ptep, sz, addr, end, flags, pages, nr))
-			return 0;
-	} while (ptep++, addr = next, addr != end);
-
-	return 1;
-}
-#else
-static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
-		unsigned int pdshift, unsigned long end, unsigned int flags,
-		struct page **pages, int *nr)
-{
-	return 0;
-}
-#endif /* CONFIG_ARCH_HAS_HUGEPD */
-
 static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		unsigned long end, unsigned int flags,
 		struct page **pages, int *nr)
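For readers following the gup_huge_pd() loop above, the hugepte_addr_end()
clamp is easy to check in isolation; this standalone sketch (sample values
assumed, not kernel code) reproduces it:

#include <stdio.h>

/*
 * Same logic as hugepte_addr_end() in the patch: advance to the next
 * sz-aligned boundary, but never past 'end'.  Comparing "x - 1" on both
 * sides keeps the test safe if the boundary wraps to 0 at the very top
 * of the address space.
 */
static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
				      unsigned long sz)
{
	unsigned long boundary = (addr + sz) & ~(sz - 1);

	return (boundary - 1 < end - 1) ? boundary : end;
}

int main(void)
{
	unsigned long sz = 1UL << 24;	/* assume a 16MB hugepte */

	/* mid-range: steps to the next 16MB boundary */
	printf("%#lx\n", hugepte_addr_end(0x1000000, 0x4000000, sz));
	/* final entry: clamps to 'end' rather than overshooting */
	printf("%#lx\n", hugepte_addr_end(0x3000000, 0x3001000, sz));
	return 0;
}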
From patchwork Wed Mar 27 15:23:32 2024
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 13/13] mm/gup: Handle hugetlb in the generic follow_page_mask code
Date: Wed, 27 Mar 2024 11:23:32 -0400
Message-ID: <20240327152332.950956-14-peterx@redhat.com>
In-Reply-To: <20240327152332.950956-1-peterx@redhat.com>
References: <20240327152332.950956-1-peterx@redhat.com>
From: Peter Xu

Now follow_page() is ready to handle hugetlb pages in whatever form, on all
architectures, so switch to the generic code path.  Time to retire
hugetlb_follow_page_mask(), following the earlier retirement of
follow_hugetlb_page() in 4849807114b8.

There may be a slight difference in how the loops run when processing slow
GUP over a large hugetlb range on archs that support cont_pte/cont_pmd:
with the patch applied, each iteration of __get_user_pages() resolves one
pgtable entry, rather than relying on the hugetlb hstate size, which may
cover multiple entries in one iteration.
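To make the difference concrete, here is the iteration-count arithmetic as
a compilable sketch (the 64K cont-pte hstate and the 2MB range are assumed
example values, not measurements from this series):

#include <stdio.h>

int main(void)
{
	unsigned long range     = 2UL << 20;	/* 2MB hugetlb range */
	unsigned long hstate_sz = 64UL << 10;	/* 64K cont-pte hstate */
	unsigned long entry_sz  = 4UL << 10;	/* one pte entry */

	/* old: one iteration per hstate-sized page */
	printf("old iterations: %lu\n", range / hstate_sz);
	/* new: one iteration per pgtable entry */
	printf("new iterations: %lu\n", range / entry_sz);
	return 0;
}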
A quick performance test on an aarch64 VM on an M1 chip shows a 15%
degradation over a tight loop of slow gup after the switch.  That shouldn't
be a problem, because slow gup should not be a hot path for GUP in general:
when the page is present, fast gup will already succeed, while when the
page is indeed missing and requires a follow-up page fault, the slow-gup
degradation will probably be buried in the fault paths anyway.  It also
explains why slow gup for THP used to be very slow before 57edfcfd3419
("mm/gup: accelerate thp gup even for "pages != NULL"") landed; that fix
was a side benefit rather than the result of a performance analysis.  If
performance ever becomes a concern, we can consider handling CONT_PTE in
follow_page(); until that is justified as necessary, keep everything clean
and simple.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Peter Xu
---
 include/linux/hugetlb.h |  7 ----
 mm/gup.c                | 15 +++------
 mm/hugetlb.c            | 71 -----------------------------------------
 3 files changed, 5 insertions(+), 88 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 294c78b3549f..a546140f89cd 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -328,13 +328,6 @@ static inline void hugetlb_zap_end(
 {
 }
 
-static inline struct page *hugetlb_follow_page_mask(
-    struct vm_area_struct *vma, unsigned long address, unsigned int flags,
-    unsigned int *page_mask)
-{
-	BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
-}
-
 static inline int copy_hugetlb_page_range(struct mm_struct *dst,
 					  struct mm_struct *src,
 					  struct vm_area_struct *dst_vma,
diff --git a/mm/gup.c b/mm/gup.c
index a02463c9420e..c803d0b0f358 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1135,18 +1135,11 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 {
 	pgd_t *pgd;
 	struct mm_struct *mm = vma->vm_mm;
+	struct page *page;
 
-	ctx->page_mask = 0;
-
-	/*
-	 * Call hugetlb_follow_page_mask for hugetlb vmas as it will use
-	 * special hugetlb page table walking code.  This eliminates the
-	 * need to check for hugetlb entries in the general walking code.
-	 */
-	if (is_vm_hugetlb_page(vma))
-		return hugetlb_follow_page_mask(vma, address, flags,
-						&ctx->page_mask);
+	vma_pgtable_walk_begin(vma);
 
+	ctx->page_mask = 0;
 	pgd = pgd_offset(mm, address);
 
 	if (unlikely(is_hugepd(__hugepd(pgd_val(*pgd)))))
@@ -1157,6 +1150,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	else
 		page = follow_p4d_mask(vma, address, pgd, flags, ctx);
 
+	vma_pgtable_walk_end(vma);
+
 	return page;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 65b9c9a48fd2..cc79891a3597 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6870,77 +6870,6 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 }
 #endif /* CONFIG_USERFAULTFD */
 
-struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-				      unsigned long address, unsigned int flags,
-				      unsigned int *page_mask)
-{
-	struct hstate *h = hstate_vma(vma);
-	struct mm_struct *mm = vma->vm_mm;
-	unsigned long haddr = address & huge_page_mask(h);
-	struct page *page = NULL;
-	spinlock_t *ptl;
-	pte_t *pte, entry;
-	int ret;
-
-	hugetlb_vma_lock_read(vma);
-	pte = hugetlb_walk(vma, haddr, huge_page_size(h));
-	if (!pte)
-		goto out_unlock;
-
-	ptl = huge_pte_lock(h, mm, pte);
-	entry = huge_ptep_get(pte);
-	if (pte_present(entry)) {
-		page = pte_page(entry);
-
-		if (!huge_pte_write(entry)) {
-			if (flags & FOLL_WRITE) {
-				page = NULL;
-				goto out;
-			}
-
-			if (gup_must_unshare(vma, flags, page)) {
-				/* Tell the caller to do unsharing */
-				page = ERR_PTR(-EMLINK);
-				goto out;
-			}
-		}
-
-		page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
-
-		/*
-		 * Note that page may be a sub-page, and with vmemmap
-		 * optimizations the page struct may be read only.
-		 * try_grab_page() will increase the ref count on the
-		 * head page, so this will be OK.
-		 *
-		 * try_grab_page() should always be able to get the page here,
-		 * because we hold the ptl lock and have verified pte_present().
-		 */
-		ret = try_grab_page(page, flags);
-
-		if (WARN_ON_ONCE(ret)) {
-			page = ERR_PTR(ret);
-			goto out;
-		}
-
-		*page_mask = (1U << huge_page_order(h)) - 1;
-	}
-out:
-	spin_unlock(ptl);
-out_unlock:
-	hugetlb_vma_unlock_read(vma);
-
-	/*
-	 * Fixup retval for dump requests: if pagecache doesn't exist,
-	 * don't try to allocate a new page but just skip it.
-	 */
-	if (!page && (flags & FOLL_DUMP) &&
-	    !hugetlbfs_pagecache_present(h, vma, address))
-		page = ERR_PTR(-EFAULT);
-
-	return page;
-}
-
 long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end,
 		pgprot_t newprot, unsigned long cp_flags)
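As a closing illustration, a hypothetical slow-gup caller now takes the same
path for hugetlb and ordinary VMAs alike; nothing hugetlb-specific is
required on the caller's side (sketch only, assuming a valid vma held under
the mmap lock):

	/* grab one page at 'addr', hugetlb or not, via the unified walk */
	struct page *page = follow_page(vma, addr & PAGE_MASK, FOLL_GET);

	if (!IS_ERR_OR_NULL(page)) {
		/* ... use the page ... */
		put_page(page);
	}

Internally, follow_page_mask() figures out the leaf size on its own and
reports it through ctx->page_mask, as shown in the diff above.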