From patchwork Thu Feb 24 12:26:02 2022
From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: Andrew Morton , Hugh Dickins , Linus Torvalds , David Rientjes , Shakeel Butt , John Hubbard , Jason Gunthorpe , Mike Kravetz , Mike Rapoport , Yang Shi , "Kirill A . 
Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , linux-mm@kvack.org, David Hildenbrand , Khalid Aziz Subject: [PATCH RFC 01/13] mm/rmap: fix missing swap_free() in try_to_unmap() after arch_unmap_one() failed Date: Thu, 24 Feb 2022 13:26:02 +0100 Message-Id: <20220224122614.94921-2-david@redhat.com> In-Reply-To: <20220224122614.94921-1-david@redhat.com> References: <20220224122614.94921-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-Stat-Signature: zwa6bu3rb3pbok691qffkx4zjkfsjuaa X-Rspam-User: Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=DxZOsnvq; spf=none (imf28.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 170.10.129.124) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 48FCDC0002 X-HE-Tag: 1645705657-69078 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In case arch_unmap_one() fails, we already did a swap_duplicate(). let's undo that properly via swap_free(). Fixes: ca827d55ebaa ("mm, swap: Add infrastructure for saving page metadata on swap") Cc: Khalid Aziz Signed-off-by: David Hildenbrand Reviewed-by: Khalid Aziz --- mm/rmap.c | 1 + 1 file changed, 1 insertion(+) diff --git a/mm/rmap.c b/mm/rmap.c index 6a1e8c7f6213..f825aeef61ca 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1625,6 +1625,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, break; } if (arch_unmap_one(mm, vma, address, pteval) < 0) { + swap_free(entry); set_pte_at(mm, address, pvmw.pte, pteval); ret = false; page_vma_mapped_walk_done(&pvmw); From patchwork Thu Feb 24 12:26:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 12758456 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2A233C433EF for ; Thu, 24 Feb 2022 12:27:58 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C02AC8D0006; Thu, 24 Feb 2022 07:27:57 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id B8BA58D0001; Thu, 24 Feb 2022 07:27:57 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A2CB88D0006; Thu, 24 Feb 2022 07:27:57 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0166.hostedemail.com [216.40.44.166]) by kanga.kvack.org (Postfix) with ESMTP id 945FD8D0001 for ; Thu, 24 Feb 2022 07:27:57 -0500 (EST) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 511468249980 for ; Thu, 24 Feb 2022 12:27:57 +0000 (UTC) X-FDA: 79177600194.21.9FB3B8B Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf28.hostedemail.com (Postfix) with ESMTP id AD11BC000C for ; Thu, 24 Feb 2022 12:27:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; 
s=mimecast20190719; t=1645705676; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RXAvt29aks092V89TFuSrIQhc1KpFNVRX0HPI0UZk40=; b=N8adVdfaFssaNyKgZRg7trbeU5sAqBmcNlw3ktaY4bDqTS4Ew4/o7RR2iQq+3wEOMsC91f u4Ic41R9v1rVozr8iDR5IX0nL9dWdBdYLj2O0A1G1iGYQJQEud/UleIS2ONFE2zws9RFqm 7xztG7SXUVf6JEn7ZxhRh8hVFlRIZmw= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-577-PCUaTtRzMeag2gOqQMBV7w-1; Thu, 24 Feb 2022 07:27:55 -0500 X-MC-Unique: PCUaTtRzMeag2gOqQMBV7w-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E12221006AA6; Thu, 24 Feb 2022 12:27:51 +0000 (UTC) Received: from t480s.redhat.com (unknown [10.39.194.160]) by smtp.corp.redhat.com (Postfix) with ESMTP id CF4552634C; Thu, 24 Feb 2022 12:27:28 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: Andrew Morton , Hugh Dickins , Linus Torvalds , David Rientjes , Shakeel Butt , John Hubbard , Jason Gunthorpe , Mike Kravetz , Mike Rapoport , Yang Shi , "Kirill A . Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , linux-mm@kvack.org, David Hildenbrand Subject: [PATCH RFC 02/13] mm/hugetlb: take src_mm->write_protect_seq in copy_hugetlb_page_range() Date: Thu, 24 Feb 2022 13:26:03 +0100 Message-Id: <20220224122614.94921-3-david@redhat.com> In-Reply-To: <20220224122614.94921-1-david@redhat.com> References: <20220224122614.94921-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-Rspam-User: Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=N8adVdfa; spf=none (imf28.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 170.10.129.124) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: AD11BC000C X-Stat-Signature: qkpot5ygwojhpicr7ozb3egdxink5qu3 X-HE-Tag: 1645705676-299849 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Let's do it just like copy_page_range(), taking the seqlock and making sure the mmap_lock is held in write mode. This allows for add a VM_BUG_ON to page_needs_cow_for_dma() and properly synchronizes cocnurrent fork() with GUP-fast of hugetlb pages, which will be relevant for further changes. Signed-off-by: David Hildenbrand --- include/linux/mm.h | 4 ++++ mm/hugetlb.c | 8 ++++++-- 2 files changed, 10 insertions(+), 2 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index a12291cfe5dd..b29cb8bacd59 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1323,6 +1323,8 @@ static inline bool is_cow_mapping(vm_flags_t flags) /* * This should most likely only be called during fork() to see whether we * should break the cow immediately for a page on the src mm. 
+ * + * The caller has to hold the PT lock and the vma->vm_mm->->write_protect_seq. */ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma, struct page *page) @@ -1330,6 +1332,8 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma, if (!is_cow_mapping(vma->vm_flags)) return false; + VM_BUG_ON(!(raw_read_seqcount(&vma->vm_mm->write_protect_seq) & 1)); + if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags)) return false; diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 61895cc01d09..e2e2d0f3f259 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -4710,6 +4710,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, vma->vm_start, vma->vm_end); mmu_notifier_invalidate_range_start(&range); + mmap_assert_write_locked(src); + raw_write_seqcount_begin(&src->write_protect_seq); } else { /* * For shared mappings i_mmap_rwsem must be held to call @@ -4842,10 +4844,12 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, spin_unlock(dst_ptl); } - if (cow) + if (cow) { + raw_write_seqcount_end(&src->write_protect_seq); mmu_notifier_invalidate_range_end(&range); - else + } else { i_mmap_unlock_read(mapping); + } return ret; } From patchwork Thu Feb 24 12:26:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 12758457 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2E94CC433EF for ; Thu, 24 Feb 2022 12:28:30 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 80B398D0003; Thu, 24 Feb 2022 07:28:29 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 7934A8D0001; Thu, 24 Feb 2022 07:28:29 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 65B4E8D0003; Thu, 24 Feb 2022 07:28:29 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0240.hostedemail.com [216.40.44.240]) by kanga.kvack.org (Postfix) with ESMTP id 5584E8D0001 for ; Thu, 24 Feb 2022 07:28:29 -0500 (EST) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 07E6B9F5F5 for ; Thu, 24 Feb 2022 12:28:29 +0000 (UTC) X-FDA: 79177601538.23.279E76D Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf03.hostedemail.com (Postfix) with ESMTP id 80B1520004 for ; Thu, 24 Feb 2022 12:28:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1645705708; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=aj4DcramAuSghxYBNgB7LsYO7zn5E/reH2YN2OU1WzY=; b=iDHvQ+dvXRL15+RSt87mXkhXRoghZXwT4zm+2KiUnnNDJ2Dn9efN1bar+htX/Wur0T+75/ WqQn9LTnFsgCj6v+/dr+PN425/AS6uSxgx4uzJUciWH+7Q8bSNT1fZmS7vQb4MzijbcgmW YSKV2vzCLKIuq5gO19gvWLZHVxn4T/M= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-592-zsLlUm8ZP7CQRtQDGjW4FQ-1; Thu, 24 Feb 2022 07:28:24 -0500 X-MC-Unique: zsLlUm8ZP7CQRtQDGjW4FQ-1 Received: from 
smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C7B6480D6AD; Thu, 24 Feb 2022 12:28:21 +0000 (UTC) Received: from t480s.redhat.com (unknown [10.39.194.160]) by smtp.corp.redhat.com (Postfix) with ESMTP id 4B1882634C; Thu, 24 Feb 2022 12:27:52 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: Andrew Morton , Hugh Dickins , Linus Torvalds , David Rientjes , Shakeel Butt , John Hubbard , Jason Gunthorpe , Mike Kravetz , Mike Rapoport , Yang Shi , "Kirill A . Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , linux-mm@kvack.org, David Hildenbrand Subject: [PATCH RFC 03/13] mm/memory: slightly simplify copy_present_pte() Date: Thu, 24 Feb 2022 13:26:04 +0100 Message-Id: <20220224122614.94921-4-david@redhat.com> In-Reply-To: <20220224122614.94921-1-david@redhat.com> References: <20220224122614.94921-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=iDHvQ+dv; dmarc=pass (policy=none) header.from=redhat.com; spf=none (imf03.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 170.10.133.124) smtp.mailfrom=david@redhat.com X-Rspam-User: X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 80B1520004 X-Stat-Signature: s579fb7dkrmxj9nr8imk1goeyqffecm6 X-HE-Tag: 1645705708-335192 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Let's move the pinning check into the caller, to simplify return code logic and prepare for further changes: relocating the page_needs_cow_for_dma() into rmap handling code. While at it, remove the unused pte parameter and simplify the comments a bit. No functional change intended. Signed-off-by: David Hildenbrand --- mm/memory.c | 53 ++++++++++++++++------------------------------------- 1 file changed, 16 insertions(+), 37 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index c6177d897964..accb72a3343d 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -865,19 +865,11 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, } /* - * Copy a present and normal page if necessary. + * Copy a present and normal page. * - * NOTE! The usual case is that this doesn't need to do - * anything, and can just return a positive value. That - * will let the caller know that it can just increase - * the page refcount and re-use the pte the traditional - * way. - * - * But _if_ we need to copy it because it needs to be - * pinned in the parent (and the child should get its own - * copy rather than just a reference to the same page), - * we'll do that here and return zero to let the caller - * know we're done. + * NOTE! The usual case is that this isn't required; + * instead, the caller can just increase the page refcount + * and re-use the pte the traditional way. 
* * And if we need a pre-allocated page but don't yet have * one, return a negative error to let the preallocation @@ -887,25 +879,10 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, static inline int copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss, - struct page **prealloc, pte_t pte, struct page *page) + struct page **prealloc, struct page *page) { struct page *new_page; - - /* - * What we want to do is to check whether this page may - * have been pinned by the parent process. If so, - * instead of wrprotect the pte on both sides, we copy - * the page immediately so that we'll always guarantee - * the pinned page won't be randomly replaced in the - * future. - * - * The page pinning checks are just "has this mm ever - * seen pinning", along with the (inexact) check of - * the page count. That might give false positives for - * for pinning, but it will work correctly. - */ - if (likely(!page_needs_cow_for_dma(src_vma, page))) - return 1; + pte_t pte; new_page = *prealloc; if (!new_page) @@ -947,14 +924,16 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, struct page *page; page = vm_normal_page(src_vma, addr, pte); - if (page) { - int retval; - - retval = copy_present_page(dst_vma, src_vma, dst_pte, src_pte, - addr, rss, prealloc, pte, page); - if (retval <= 0) - return retval; - + if (page && unlikely(page_needs_cow_for_dma(src_vma, page))) { + /* + * If this page may have been pinned by the parent process, + * copy the page immediately for the child so that we'll always + * guarantee the pinned page won't be randomly replaced in the + * future. + */ + return copy_present_page(dst_vma, src_vma, dst_pte, src_pte, + addr, rss, prealloc, page); + } else if (page) { get_page(page); page_dup_rmap(page, false); rss[mm_counter(page)]++; From patchwork Thu Feb 24 12:26:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 12758458 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4217BC433F5 for ; Thu, 24 Feb 2022 12:28:58 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C9E958D0002; Thu, 24 Feb 2022 07:28:57 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id C4E5C8D0001; Thu, 24 Feb 2022 07:28:57 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AEE508D0002; Thu, 24 Feb 2022 07:28:57 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id A058C8D0001 for ; Thu, 24 Feb 2022 07:28:57 -0500 (EST) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 51090216FE for ; Thu, 24 Feb 2022 12:28:57 +0000 (UTC) X-FDA: 79177602714.12.79C5CFA Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf23.hostedemail.com (Postfix) with ESMTP id 6CD43140005 for ; Thu, 24 Feb 2022 12:28:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1645705735; 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=e7ZQKYDQncYbDG1B2fFk8VM+4OlBI5GOY3f+j/Ennbk=; b=FTEhHVDanEc6bPU7rIA5fURN0HD49szatgMWcIi1s4alpCHbfhLF3H9D2X2JIUEZc+uDpV +gaQYh0IOHx7oTm27mGjnzTJ6rYBG9ADUghu+O1KXykCsEFgDFRdAxdT68s/hhaZV6emxD fDpCZb+FnNyene4iZcKWHptPSp23G3Y= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-137-l-dPh7FOOXqyuZa2skQfaQ-1; Thu, 24 Feb 2022 07:28:52 -0500 X-MC-Unique: l-dPh7FOOXqyuZa2skQfaQ-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 77EB851DF; Thu, 24 Feb 2022 12:28:49 +0000 (UTC) Received: from t480s.redhat.com (unknown [10.39.194.160]) by smtp.corp.redhat.com (Postfix) with ESMTP id 49E442634C; Thu, 24 Feb 2022 12:28:22 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: Andrew Morton , Hugh Dickins , Linus Torvalds , David Rientjes , Shakeel Butt , John Hubbard , Jason Gunthorpe , Mike Kravetz , Mike Rapoport , Yang Shi , "Kirill A . Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , linux-mm@kvack.org, David Hildenbrand Subject: [PATCH RFC 04/13] mm/rmap: split page_dup_rmap() into page_dup_file_rmap() and page_try_dup_anon_rmap() Date: Thu, 24 Feb 2022 13:26:05 +0100 Message-Id: <20220224122614.94921-5-david@redhat.com> In-Reply-To: <20220224122614.94921-1-david@redhat.com> References: <20220224122614.94921-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=FTEhHVDa; dmarc=pass (policy=none) header.from=redhat.com; spf=none (imf23.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 170.10.129.124) smtp.mailfrom=david@redhat.com X-Rspam-User: X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: 6CD43140005 X-Stat-Signature: oe1pjdir8ai18zuu7ej1bswfaqyuther X-HE-Tag: 1645705736-781067 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: ... and move the special check for pinned pages into page_try_dup_anon_rmap() to prepare for tracking exclusive anonymous pages via a new pageflag, clearing it only after making sure that there are no GUP pins on the anonymous page. We really only care about pins on anonymous pages, because they are prone to getting replaced in the COW handler once mapped R/O. For !anon pages in cow-mappings (!VM_SHARED && VM_MAYWRITE) we shouldn't really care about that, at least not that I could come up with an example. Let's drop the is_cow_mapping() check from page_needs_cow_for_dma(), as we know we're dealing with anonymous pages. Also, drop the handling of pinned pages from copy_huge_pud() and add a comment if ever supporting anonymous pages on the PUD level. 
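For anonymous pages, the fork-time calling convention this introduces looks roughly as follows (a minimal sketch only; the authoritative call sites are in the hunks below):

	/* Sketch: duplicate an anon rmap at fork, falling back to early COW. */
	get_page(page);
	if (unlikely(page_try_dup_anon_rmap(page, false, src_vma))) {
		/* The page may be pinned: drop our reference and copy it early. */
		put_page(page);
		/* ... take the early-copy path, e.g. copy_present_page() ... */
	} else {
		/* Shared successfully: the page must stay mapped R/O in both MMs. */
	}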
This is a preparation for tracking exclusivity of anonymous pages in the rmap code, and disallowing marking a page shared (-> failing to duplicate) if there are GUP pins on a page. RFC notes: if I'm missing something important for !anon pages, we could similarly handle it via page_try_dup_file_rmap(). Signed-off-by: David Hildenbrand --- include/linux/mm.h | 5 +---- include/linux/rmap.h | 48 +++++++++++++++++++++++++++++++++++++++++++- mm/huge_memory.c | 27 ++++++++----------------- mm/hugetlb.c | 16 ++++++++------- mm/memory.c | 17 +++++++++++----- mm/migrate.c | 2 +- 6 files changed, 78 insertions(+), 37 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index b29cb8bacd59..ffeb7f5fa32b 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1322,16 +1322,13 @@ static inline bool is_cow_mapping(vm_flags_t flags) /* * This should most likely only be called during fork() to see whether we - * should break the cow immediately for a page on the src mm. + * should break the cow immediately for an anon page on the src mm. * * The caller has to hold the PT lock and the vma->vm_mm->->write_protect_seq. */ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma, struct page *page) { - if (!is_cow_mapping(vma->vm_flags)) - return false; - VM_BUG_ON(!(raw_read_seqcount(&vma->vm_mm->write_protect_seq) & 1)); if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags)) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index e704b1a4c06c..92c3585b8c6a 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -180,11 +180,57 @@ void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *, void hugepage_add_new_anon_rmap(struct page *, struct vm_area_struct *, unsigned long); -static inline void page_dup_rmap(struct page *page, bool compound) +static inline void __page_dup_rmap(struct page *page, bool compound) { atomic_inc(compound ? compound_mapcount_ptr(page) : &page->_mapcount); } +static inline void page_dup_file_rmap(struct page *page, bool compound) +{ + __page_dup_rmap(page, compound); +} + +/** + * page_try_dup_anon_rmap - try duplicating a mapping of an already mapped + * anonymous page + * @page: the page to duplicate the mapping for + * @compound: the page is mapped as compound or as a small page + * @vma: the source vma + * + * The caller needs to hold the PT lock and the vma->vma_mm->write_protect_seq. + * + * Duplicating the mapping can only fail if the page may be pinned; device + * private pages cannot get pinned and consequently this function cannot fail. + * + * If duplicating the mapping succeeds, the page has to be mapped R/O into + * the parent and the child. It must *not* get mapped writable after this call. + * + * Returns 0 if duplicating the mapping succeeded. Returns -EBUSY otherwise. + */ +static inline int page_try_dup_anon_rmap(struct page *page, bool compound, + struct vm_area_struct *vma) +{ + VM_BUG_ON_PAGE(!PageAnon(page), page); + + /* + * If this page may have been pinned by the parent process, + * don't allow to duplicate the mapping but instead require to e.g., + * copy the page immediately for the child so that we'll always + * guarantee the pinned page won't be randomly replaced in the + * future on write faults. + */ + if (likely(!is_device_private_page(page) && + unlikely(page_needs_cow_for_dma(vma, page)))) + return -EBUSY; + + /* + * It's okay to share the anon page between both processes, mapping + * the page R/O into both processes. 
+ */ + __page_dup_rmap(page, compound); + return 0; +} + /* * Called from mm/vmscan.c to handle paging out */ diff --git a/mm/huge_memory.c b/mm/huge_memory.c index cda88d8ac1bd..c126d728b8de 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1097,23 +1097,16 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, src_page = pmd_page(pmd); VM_BUG_ON_PAGE(!PageHead(src_page), src_page); - /* - * If this page is a potentially pinned page, split and retry the fault - * with smaller page size. Normally this should not happen because the - * userspace should use MADV_DONTFORK upon pinned regions. This is a - * best effort that the pinned pages won't be replaced by another - * random page during the coming copy-on-write. - */ - if (unlikely(page_needs_cow_for_dma(src_vma, src_page))) { + get_page(src_page); + if (unlikely(page_try_dup_anon_rmap(src_page, true, src_vma))) { + /* Page maybe pinned: split and retry the fault on PTEs. */ + put_page(src_page); pte_free(dst_mm, pgtable); spin_unlock(src_ptl); spin_unlock(dst_ptl); __split_huge_pmd(src_vma, src_pmd, addr, false, NULL); return -EAGAIN; } - - get_page(src_page); - page_dup_rmap(src_page, true); add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR); out_zero_page: mm_inc_nr_ptes(dst_mm); @@ -1217,14 +1210,10 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm, /* No huge zero pud yet */ } - /* Please refer to comments in copy_huge_pmd() */ - if (unlikely(page_needs_cow_for_dma(vma, pud_page(pud)))) { - spin_unlock(src_ptl); - spin_unlock(dst_ptl); - __split_huge_pud(vma, src_pud, addr); - return -EAGAIN; - } - + /* + * TODO: once we support anonymous pages, use page_try_dup_anon_rmap() + * and split if duplicating fails. + */ pudp_set_wrprotect(src_mm, addr, src_pud); pud = pud_mkold(pud_wrprotect(pud)); set_pud_at(dst_mm, addr, dst_pud, pud); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index e2e2d0f3f259..6b928e720e83 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -4781,15 +4781,18 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, get_page(ptepage); /* - * This is a rare case where we see pinned hugetlb - * pages while they're prone to COW. We need to do the - * COW earlier during fork. + * Failing to duplicate the anon rmap is a rare case + * where we see pinned hugetlb pages while they're + * prone to COW. We need to do the COW earlier during + * fork. * * When pre-allocating the page or copying data, we * need to be without the pgtable locks since we could * sleep during the process. 
*/ - if (unlikely(page_needs_cow_for_dma(vma, ptepage))) { + if (!PageAnon(ptepage)) { + page_dup_file_rmap(ptepage, true); + } else if (page_try_dup_anon_rmap(ptepage, true, vma)) { pte_t src_pte_old = entry; struct page *new; @@ -4836,7 +4839,6 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, entry = huge_pte_wrprotect(entry); } - page_dup_rmap(ptepage, true); set_huge_pte_at(dst, addr, dst_pte, entry); hugetlb_count_add(npages, dst); } @@ -5515,7 +5517,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm, ClearHPageRestoreReserve(page); hugepage_add_new_anon_rmap(page, vma, haddr); } else - page_dup_rmap(page, true); + page_dup_file_rmap(page, true); new_pte = make_huge_pte(vma, page, ((vma->vm_flags & VM_WRITE) && (vma->vm_flags & VM_SHARED))); set_huge_pte_at(mm, haddr, ptep, new_pte); @@ -5875,7 +5877,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, goto out_release_unlock; if (vm_shared) { - page_dup_rmap(page, true); + page_dup_file_rmap(page, true); } else { ClearHPageRestoreReserve(page); hugepage_add_new_anon_rmap(page, dst_vma, dst_addr); diff --git a/mm/memory.c b/mm/memory.c index accb72a3343d..b9602d41d907 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -828,7 +828,8 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, */ get_page(page); rss[mm_counter(page)]++; - page_dup_rmap(page, false); + /* Cannot fail as these pages cannot get pinned. */ + BUG_ON(page_try_dup_anon_rmap(page, false, src_vma)); /* * We do not preserve soft-dirty information, because so @@ -924,18 +925,24 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, struct page *page; page = vm_normal_page(src_vma, addr, pte); - if (page && unlikely(page_needs_cow_for_dma(src_vma, page))) { + if (page && PageAnon(page)) { /* * If this page may have been pinned by the parent process, * copy the page immediately for the child so that we'll always * guarantee the pinned page won't be randomly replaced in the * future. */ - return copy_present_page(dst_vma, src_vma, dst_pte, src_pte, - addr, rss, prealloc, page); + get_page(page); + if (unlikely(page_try_dup_anon_rmap(page, false, src_vma))) { + /* Page maybe pinned, we have to copy. 
*/ + put_page(page); + return copy_present_page(dst_vma, src_vma, dst_pte, src_pte, + addr, rss, prealloc, page); + } + rss[mm_counter(page)]++; } else if (page) { get_page(page); - page_dup_rmap(page, false); + page_dup_file_rmap(page, false); rss[mm_counter(page)]++; } diff --git a/mm/migrate.c b/mm/migrate.c index c7da064b4781..524c2648ab36 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -240,7 +240,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, if (PageAnon(new)) hugepage_add_anon_rmap(new, vma, pvmw.address); else - page_dup_rmap(new, true); + page_dup_file_rmap(new, true); set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte); } else #endif
From patchwork Thu Feb 24 12:26:06 2022
From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: Andrew Morton , Hugh Dickins , 
Linus Torvalds , David Rientjes , Shakeel Butt , John Hubbard , Jason Gunthorpe , Mike Kravetz , Mike Rapoport , Yang Shi , "Kirill A . Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , linux-mm@kvack.org, David Hildenbrand Subject: [PATCH RFC 05/13] mm/rmap: remove do_page_add_anon_rmap() Date: Thu, 24 Feb 2022 13:26:06 +0100 Message-Id: <20220224122614.94921-6-david@redhat.com> In-Reply-To: <20220224122614.94921-1-david@redhat.com> References: <20220224122614.94921-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 8A1D314000A X-Rspam-User: Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=ee8qG8BO; spf=none (imf23.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 170.10.133.124) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com X-Stat-Signature: wnz1ezbempn6nckb8ury69rjtc4awsuw X-HE-Tag: 1645705778-365187 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: ... and instead convert page_add_anon_rmap() to accept flags. Passing flags instead of bools is usually nicer either way, and we want to more often also pass RMAP_EXCLUSIVE in follow up patches when detecting that an anonymous page is exclusive: for example, when restoring an anonymous page from a writable migration entry. This is a preparation for marking an anonymous page inside page_add_anon_rmap() as exclusive when RMAP_EXCLUSIVE is passed. 
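To make the conversion concrete, a minimal sketch of a call site before and after this change (flag names as used in this series):

	/* Before: the last argument was a bool "compound". */
	page_add_anon_rmap(page, vma, address, false);

	/*
	 * After: the last argument is an int of rmap flags; 0 means a small
	 * page with no exclusivity hint, and callers can OR in RMAP_COMPOUND
	 * or, in follow-up patches, RMAP_EXCLUSIVE.
	 */
	page_add_anon_rmap(page, vma, address, 0);
	page_add_anon_rmap(page, vma, address, RMAP_COMPOUND);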
Signed-off-by: David Hildenbrand --- include/linux/rmap.h | 4 +--- mm/huge_memory.c | 2 +- mm/ksm.c | 2 +- mm/memory.c | 2 +- mm/migrate.c | 2 +- mm/rmap.c | 17 +++-------------- mm/swapfile.c | 2 +- 7 files changed, 9 insertions(+), 22 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 92c3585b8c6a..f230e86b4587 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -167,9 +167,7 @@ struct anon_vma *page_get_anon_vma(struct page *page); */ void page_move_anon_rmap(struct page *, struct vm_area_struct *); void page_add_anon_rmap(struct page *, struct vm_area_struct *, - unsigned long, bool); -void do_page_add_anon_rmap(struct page *, struct vm_area_struct *, - unsigned long, int); + unsigned long, int); void page_add_new_anon_rmap(struct page *, struct vm_area_struct *, unsigned long, bool); void page_add_file_rmap(struct page *, bool); diff --git a/mm/huge_memory.c b/mm/huge_memory.c index c126d728b8de..2ca137e01e84 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -3115,7 +3115,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new) flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE); if (PageAnon(new)) - page_add_anon_rmap(new, vma, mmun_start, true); + page_add_anon_rmap(new, vma, mmun_start, RMAP_COMPOUND); else page_add_file_rmap(new, true); set_pmd_at(mm, mmun_start, pvmw->pmd, pmde); diff --git a/mm/ksm.c b/mm/ksm.c index c20bd4d9a0d9..e897577c54c5 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -1153,7 +1153,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page, */ if (!is_zero_pfn(page_to_pfn(kpage))) { get_page(kpage); - page_add_anon_rmap(kpage, vma, addr, false); + page_add_anon_rmap(kpage, vma, addr, 0); newpte = mk_pte(kpage, vma->vm_page_prot); } else { newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage), diff --git a/mm/memory.c b/mm/memory.c index b9602d41d907..c2f1c2428303 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3709,7 +3709,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) page_add_new_anon_rmap(page, vma, vmf->address, false); lru_cache_add_inactive_or_unevictable(page, vma); } else { - do_page_add_anon_rmap(page, vma, vmf->address, exclusive); + page_add_anon_rmap(page, vma, vmf->address, exclusive); } set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); diff --git a/mm/migrate.c b/mm/migrate.c index 524c2648ab36..d4d72a15224c 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -246,7 +246,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, #endif { if (PageAnon(new)) - page_add_anon_rmap(new, vma, pvmw.address, false); + page_add_anon_rmap(new, vma, pvmw.address, 0); else page_add_file_rmap(new, false); set_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte); diff --git a/mm/rmap.c b/mm/rmap.c index f825aeef61ca..902ebf99d147 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1129,26 +1129,15 @@ static void __page_check_anon_rmap(struct page *page, * @page: the page to add the mapping to * @vma: the vm area in which the mapping is added * @address: the user virtual address mapped - * @compound: charge the page as compound or small page + * @flags: the rmap flags * * The caller needs to hold the pte lock, and the page must be locked in * the anon_vma case: to serialize mapping,index checking after setting, * and to ensure that PageAnon is not being upgraded racily to PageKsm * (but PageKsm is never downgraded to PageAnon). 
*/ -void page_add_anon_rmap(struct page *page, - struct vm_area_struct *vma, unsigned long address, bool compound) -{ - do_page_add_anon_rmap(page, vma, address, compound ? RMAP_COMPOUND : 0); -} - -/* - * Special version of the above for do_swap_page, which often runs - * into pages that are exclusively owned by the current process. - * Everybody else should continue to use page_add_anon_rmap above. - */ -void do_page_add_anon_rmap(struct page *page, - struct vm_area_struct *vma, unsigned long address, int flags) +void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma, + unsigned long address, int flags) { bool compound = flags & RMAP_COMPOUND; bool first; diff --git a/mm/swapfile.c b/mm/swapfile.c index a5183315dc58..eacda7ca7ac9 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -1800,7 +1800,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd, inc_mm_counter(vma->vm_mm, MM_ANONPAGES); get_page(page); if (page == swapcache) { - page_add_anon_rmap(page, vma, addr, false); + page_add_anon_rmap(page, vma, addr, 0); } else { /* ksm created a completely new copy */ page_add_new_anon_rmap(page, vma, addr, false); lru_cache_add_inactive_or_unevictable(page, vma); From patchwork Thu Feb 24 12:26:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 12758461 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2BE54C433F5 for ; Thu, 24 Feb 2022 12:30:06 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id BDA5F8D0005; Thu, 24 Feb 2022 07:30:05 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id B88C48D0001; Thu, 24 Feb 2022 07:30:05 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A50F38D0005; Thu, 24 Feb 2022 07:30:05 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0234.hostedemail.com [216.40.44.234]) by kanga.kvack.org (Postfix) with ESMTP id 979AE8D0001 for ; Thu, 24 Feb 2022 07:30:05 -0500 (EST) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 5490A9A81E for ; Thu, 24 Feb 2022 12:30:05 +0000 (UTC) X-FDA: 79177605570.25.2A25244 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf11.hostedemail.com (Postfix) with ESMTP id D5EF440009 for ; Thu, 24 Feb 2022 12:30:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1645705804; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=zhqDRec0VOKot1t14KWHy23p5AtjEJVAkoghtw0kzkE=; b=JjgUkNbQZaXIsogCoO9H5u26ncDcAK6eh5eKS9kPGX2v3K84q9cQo+iVzcnRdjkYDL4NbF EKSkb0k/ggBl7uyrzzviCruoIX24zIWHOsdhP420xNjRK61gLbweoOZkrFPW7iuvJB1c/m Yj4kCNTeteAxlntjlwKT3EMJ+QU+Koo= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-53-C2te0jKPMjmNIiCZmNMFvw-1; Thu, 24 Feb 2022 07:30:01 -0500 X-MC-Unique: C2te0jKPMjmNIiCZmNMFvw-1 Received: from 
smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7CC3A80D6AD; Thu, 24 Feb 2022 12:29:57 +0000 (UTC) Received: from t480s.redhat.com (unknown [10.39.194.160]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3218C2634C; Thu, 24 Feb 2022 12:29:32 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: Andrew Morton , Hugh Dickins , Linus Torvalds , David Rientjes , Shakeel Butt , John Hubbard , Jason Gunthorpe , Mike Kravetz , Mike Rapoport , Yang Shi , "Kirill A . Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , linux-mm@kvack.org, David Hildenbrand Subject: [PATCH RFC 06/13] mm/rmap: pass rmap flags to hugepage_add_anon_rmap() Date: Thu, 24 Feb 2022 13:26:07 +0100 Message-Id: <20220224122614.94921-7-david@redhat.com> In-Reply-To: <20220224122614.94921-1-david@redhat.com> References: <20220224122614.94921-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-Rspamd-Queue-Id: D5EF440009 X-Stat-Signature: io8p9kw93rto1jizgns898oiyk797ypf X-Rspam-User: Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=JjgUkNbQ; spf=none (imf11.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 170.10.129.124) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com X-Rspamd-Server: rspam05 X-HE-Tag: 1645705804-847309 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Let's prepare for passing RMAP_EXCLUSIVE, similarly as we do for page_add_anon_rmap() now. RMAP_COMPOUND is implicit for hugetlb pages and ignored. Signed-off-by: David Hildenbrand --- include/linux/rmap.h | 2 +- mm/migrate.c | 2 +- mm/rmap.c | 8 +++++--- 3 files changed, 7 insertions(+), 5 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index f230e86b4587..593a4566420f 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -174,7 +174,7 @@ void page_add_file_rmap(struct page *, bool); void page_remove_rmap(struct page *, bool); void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *, - unsigned long); + unsigned long, int); void hugepage_add_new_anon_rmap(struct page *, struct vm_area_struct *, unsigned long); diff --git a/mm/migrate.c b/mm/migrate.c index d4d72a15224c..709cb11d5b81 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -238,7 +238,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, pte = pte_mkhuge(pte); pte = arch_make_huge_pte(pte, shift, vma->vm_flags); if (PageAnon(new)) - hugepage_add_anon_rmap(new, vma, pvmw.address); + hugepage_add_anon_rmap(new, vma, pvmw.address, 0); else page_dup_file_rmap(new, true); set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte); diff --git a/mm/rmap.c b/mm/rmap.c index 902ebf99d147..bafdb7f70cec 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -2409,9 +2409,11 @@ void rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc) * The following two functions are for anonymous (private mapped) hugepages. 
* Unlike common anonymous pages, anonymous hugepages have no accounting code * and no lru code, because we handle hugepages differently from common pages. + * + * RMAP_COMPOUND is ignored. */ -void hugepage_add_anon_rmap(struct page *page, - struct vm_area_struct *vma, unsigned long address) +void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma, + unsigned long address, int flags) { struct anon_vma *anon_vma = vma->anon_vma; int first; @@ -2421,7 +2423,7 @@ void hugepage_add_anon_rmap(struct page *page, /* address might be in next vma when migration races vma_adjust */ first = atomic_inc_and_test(compound_mapcount_ptr(page)); if (first) - __page_set_anon_rmap(page, vma, address, 0); + __page_set_anon_rmap(page, vma, address, flags & RMAP_EXCLUSIVE); } void hugepage_add_new_anon_rmap(struct page *page, From patchwork Thu Feb 24 12:26:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 12758462 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9FE2FC433EF for ; Thu, 24 Feb 2022 12:31:06 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 40FD98D0003; Thu, 24 Feb 2022 07:31:06 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 3BF758D0001; Thu, 24 Feb 2022 07:31:06 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2886E8D0003; Thu, 24 Feb 2022 07:31:06 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0049.hostedemail.com [216.40.44.49]) by kanga.kvack.org (Postfix) with ESMTP id 1BCAF8D0001 for ; Thu, 24 Feb 2022 07:31:06 -0500 (EST) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id CC23A9F846 for ; Thu, 24 Feb 2022 12:31:05 +0000 (UTC) X-FDA: 79177608090.25.E754115 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf08.hostedemail.com (Postfix) with ESMTP id E3CB4160006 for ; Thu, 24 Feb 2022 12:31:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1645705864; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=a7/3du5HvRrnFGXwjxtGM6+I7ouu7VFk3hP1L/xSkXg=; b=ambV93SgtrFiwEjLeT2YKVAGSLVROmhRaRD5x+Sq+iuznPLcgbIxN/kd27kCrBFTflxLud 5GLaLdfUXowa63Abj5fyUE0afjbLBbKf4bRIerFCH1liuUxwSRFzt7cZXCWtChxqDc3QII nVvcjCJiU1/2EKJSdD1ewqMLosJo8yE= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-245-V9iCVZ-EMhqdNW1jv_ymsg-1; Thu, 24 Feb 2022 07:31:01 -0500 X-MC-Unique: V9iCVZ-EMhqdNW1jv_ymsg-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 24C77804312; Thu, 24 Feb 2022 12:30:58 +0000 (UTC) Received: from t480s.redhat.com (unknown [10.39.194.160]) by smtp.corp.redhat.com (Postfix) with 
ESMTP id F17F826576; Thu, 24 Feb 2022 12:29:57 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: Andrew Morton , Hugh Dickins , Linus Torvalds , David Rientjes , Shakeel Butt , John Hubbard , Jason Gunthorpe , Mike Kravetz , Mike Rapoport , Yang Shi , "Kirill A . Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , linux-mm@kvack.org, David Hildenbrand Subject: [PATCH RFC 07/13] mm/rmap: use page_move_anon_rmap() when reusing a mapped PageAnon() page exclusively Date: Thu, 24 Feb 2022 13:26:08 +0100 Message-Id: <20220224122614.94921-8-david@redhat.com> In-Reply-To: <20220224122614.94921-1-david@redhat.com> References: <20220224122614.94921-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: E3CB4160006 X-Stat-Signature: mo57c5q9ui5ai4nynos13pxiyp77k6tp Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=ambV93Sg; spf=none (imf08.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 170.10.129.124) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com X-Rspam-User: X-HE-Tag: 1645705864-136059 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: We want to mark anonymous pages exclusive, and when using page_move_anon_rmap() we know that we are the exclusive user, as properly documented. This is a preparation for marking anonymous pages exclusive in page_move_anon_rmap(). In both instances, we're holding page lock and are sure that we're the exclusive owner (page_count() == 1). hugetlb already properly uses page_move_anon_rmap() in the write fault handler. Signed-off-by: David Hildenbrand --- mm/huge_memory.c | 2 ++ mm/memory.c | 1 + 2 files changed, 3 insertions(+) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 2ca137e01e84..7b9e76ba3f2f 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1317,6 +1317,8 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf) try_to_free_swap(page); if (page_count(page) == 1) { pmd_t entry; + + page_move_anon_rmap(page, vma); entry = pmd_mkyoung(orig_pmd); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1)) diff --git a/mm/memory.c b/mm/memory.c index c2f1c2428303..1ab0dfbec288 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3307,6 +3307,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) * and the page is locked, it's dark out, and we're wearing * sunglasses. Hit it. 
*/ + page_move_anon_rmap(page, vma); unlock_page(page); wp_page_reuse(vmf); return VM_FAULT_WRITE;
From patchwork Thu Feb 24 12:26:09 2022
From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: Andrew Morton , Hugh Dickins , Linus Torvalds , David Rientjes , Shakeel Butt , John Hubbard , Jason Gunthorpe , Mike Kravetz , Mike Rapoport , Yang Shi , "Kirill A . 
Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , linux-mm@kvack.org, David Hildenbrand Subject: [PATCH RFC 08/13] mm/page-flags: reuse PG_slab as PG_anon_exclusive for PageAnon() pages Date: Thu, 24 Feb 2022 13:26:09 +0100 Message-Id: <20220224122614.94921-9-david@redhat.com> In-Reply-To: <20220224122614.94921-1-david@redhat.com> References: <20220224122614.94921-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-Rspamd-Queue-Id: CD2F816000E X-Stat-Signature: g7835n4cbte9wk8ifo9ys6o1kiqt9m85 Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b="JXi+/fmj"; dmarc=pass (policy=none) header.from=redhat.com; spf=none (imf08.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 170.10.133.124) smtp.mailfrom=david@redhat.com X-Rspam-User: X-Rspamd-Server: rspam11 X-HE-Tag: 1645705904-47874 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The basic question we would like to have a reliable and efficient answer to is: is this anonymous page exclusive to a single process or might it be shared? In an ideal world, we'd have a spare pageflag. Unfortunately, pageflags don't grow on trees, so we have to get a little creative for the time being. Introduce a way to mark an anonymous page as exclusive, with the ultimate goal of teaching our COW logic to not do "wrong COWs", whereby GUP pins lose consistency with the pages mapped into the page table, resulting in reported memory corruptions. Most pageflags already have semantics for anonymous pages, so we're left with reusing PG_slab for our purpose: for PageAnon() pages PG_slab now translates to PG_anon_exclusive, teach some in-kernel code that manually handles PG_slab about that. Add a spoiler on the semantics of PG_anon_exclusive as documentation. More documentation will be contained in the code that actually makes use of PG_anon_exclusive. We won't be clearing PG_anon_exclusive on destructive unmapping (i.e., zapping) of page table entries, page freeing code will handle that when also invalidate page->mapping to not indicate PageAnon() anymore. Letting information about exclusivity stick around will be an important property when adding sanity checks to unpinning code. RFC notes: in-tree tools/cgroup/memcg_slabinfo.py looks like it might need some care. We'd have to lookup the head page and check if PageAnon() is set. Similarly, tools living outside the kernel repository like crash and makedumpfile might need adaptions. 
Cc: Roman Gushchin Signed-off-by: David Hildenbrand --- fs/proc/page.c | 3 +- include/linux/page-flags.h | 124 ++++++++++++++++++++++++++++++++- include/trace/events/mmflags.h | 2 +- mm/hugetlb.c | 4 ++ mm/memory-failure.c | 24 +++++-- mm/memremap.c | 11 +++ mm/page_alloc.c | 13 ++++ 7 files changed, 170 insertions(+), 11 deletions(-) diff --git a/fs/proc/page.c b/fs/proc/page.c index 9f1077d94cde..61187568e5f7 100644 --- a/fs/proc/page.c +++ b/fs/proc/page.c @@ -182,8 +182,7 @@ u64 stable_page_flags(struct page *page) u |= kpf_copy_bit(k, KPF_LOCKED, PG_locked); - u |= kpf_copy_bit(k, KPF_SLAB, PG_slab); - if (PageTail(page) && PageSlab(compound_head(page))) + if (PageSlab(page) || PageSlab(compound_head(page))) u |= 1 << KPF_SLAB; u |= kpf_copy_bit(k, KPF_ERROR, PG_error); diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 1c3b6e5c8bfd..b3c2cf71467e 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -107,7 +107,7 @@ enum pageflags { PG_workingset, PG_waiters, /* Page has waiters, check its waitqueue. Must be bit #7 and in the same byte as "PG_locked" */ PG_error, - PG_slab, + PG_slab, /* Slab page if !PageAnon() */ PG_owner_priv_1, /* Owner use. If pagecache, fs may use*/ PG_arch_1, PG_reserved, @@ -142,6 +142,63 @@ enum pageflags { PG_readahead = PG_reclaim, + /* + * Depending on the way an anonymous folio can be mapped into a page + * table (e.g., single PMD/PUD/CONT of the head page vs. PTE-mapped + * THP), PG_anon_exclusive may be set only for the head page or for + * subpages of an anonymous folio. + * + * PG_anon_exclusive is *usually* only expressive in combination with a + * page table entry. Depending on the page table entry type it might + * store the following information: + * + * Is what's mapped via this page table entry exclusive to the + * single process and can be mapped writable without further + * checks? If not, it might be shared and we might have to COW. + * + * For now, we only expect PTE-mapped THPs to make use of + * PG_anon_exclusive in subpages. For other anonymous compound + * folios (i.e., hugetlb), only the head page is logically mapped and + * holds this information. + * + * For example, an exclusive, PMD-mapped THP only has PG_anon_exclusive + * set on the head page. When replacing the PMD by a page table full + * of PTEs, PG_anon_exclusive, if set on the head page, will be set on + * all tail pages accordingly. Note that converting from a PTE-mapping + * to a PMD mapping using the same compound page is currently not + * possible and consequently doesn't require care. + * + * If GUP wants to take a reliable pin (FOLL_PIN) on an anonymous page, + * it should only pin if the relevant PG_anon_bit is set. In that case, + * the pin will be fully reliable and stay consistent with the pages + * mapped into the page table, as the bit cannot get cleared (e.g., by + * fork(), KSM) while the page is pinned. For anonymous pages that + * are mapped R/W, PG_anon_exclusive can be assumed to always be set + * because such pages cannot possibly be shared. + * + * The page table lock protecting the page table entry is the primary + * synchronization mechanism for PG_anon_exclusive; GUP-fast that does + * not take the PT lock needs special care when trying to clear the + * flag. + * + * Page table entry types and PG_anon_exclusive: + * * Present: PG_anon_exclusive applies. + * * Swap: the information is lost. PG_anon_exclusive was cleared. + * * Migration: the entry holds this information instead. 
+ * PG_anon_exclusive was cleared. + * * Device private: PG_anon_exclusive applies. + * * Device exclusive: PG_anon_exclusive applies. + * * HW Poison: PG_anon_exclusive is stale and not changed. + * + * If the page may be pinned (FOLL_PIN), clearing PG_anon_exclusive is + * not allowed and the flag will stick around until the page is freed + * and folio->mapping is cleared. + * + * Before clearing folio->mapping, PG_anon_exclusive has to be cleared + * to not result in PageSlab() false positives. + */ + PG_anon_exclusive = PG_slab, + /* Filesystems */ PG_checked = PG_owner_priv_1, @@ -425,7 +482,6 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD) TESTCLEARFLAG(Active, active, PF_HEAD) PAGEFLAG(Workingset, workingset, PF_HEAD) TESTCLEARFLAG(Workingset, workingset, PF_HEAD) -__PAGEFLAG(Slab, slab, PF_NO_TAIL) __PAGEFLAG(SlobFree, slob_free, PF_NO_TAIL) PAGEFLAG(Checked, checked, PF_NO_COMPOUND) /* Used by some filesystems */ @@ -920,6 +976,70 @@ extern bool is_free_buddy_page(struct page *page); __PAGEFLAG(Isolated, isolated, PF_ANY); +static __always_inline bool folio_test_slab(struct folio *folio) +{ + return !folio_test_anon(folio) && + test_bit(PG_slab, folio_flags(folio, FOLIO_PF_NO_TAIL)); +} + +static __always_inline int PageSlab(struct page *page) +{ + return !PageAnon(page) && + test_bit(PG_slab, &PF_NO_TAIL(page, 0)->flags); +} + +static __always_inline void __folio_set_slab(struct folio *folio) +{ + VM_BUG_ON(folio_test_anon(folio)); + __set_bit(PG_slab, folio_flags(folio, FOLIO_PF_NO_TAIL)); +} + +static __always_inline void __SetPageSlab(struct page *page) +{ + VM_BUG_ON_PGFLAGS(PageAnon(page), page); + __set_bit(PG_slab, &PF_NO_TAIL(page, 1)->flags); +} + +static __always_inline void __folio_clear_slab(struct folio *folio) +{ + VM_BUG_ON(folio_test_anon(folio)); + __clear_bit(PG_slab, folio_flags(folio, FOLIO_PF_NO_TAIL)); +} + +static __always_inline void __ClearPageSlab(struct page *page) +{ + VM_BUG_ON_PGFLAGS(PageAnon(page), page); + __clear_bit(PG_slab, &PF_NO_TAIL(page, 1)->flags); +} + +static __always_inline int PageAnonExclusive(struct page *page) +{ + VM_BUG_ON_PGFLAGS(!PageAnon(page), page); + VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page); + return test_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags); +} + +static __always_inline void SetPageAnonExclusive(struct page *page) +{ + VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page); + VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page); + set_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags); +} + +static __always_inline void ClearPageAnonExclusive(struct page *page) +{ + VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page); + VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page); + clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags); +} + +static __always_inline void __ClearPageAnonExclusive(struct page *page) +{ + VM_BUG_ON_PGFLAGS(!PageAnon(page), page); + VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page); + __clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags); +} + #ifdef CONFIG_MMU #define __PG_MLOCKED (1UL << PG_mlocked) #else diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index 116ed4d5d0f8..5662f74e027f 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -103,7 +103,7 @@ {1UL << PG_lru, "lru" }, \ {1UL << PG_active, "active" }, \ {1UL << PG_workingset, "workingset" }, \ - {1UL << PG_slab, "slab" }, \ + {1UL << PG_slab, "slab/anon_exclusive" }, \ {1UL << PG_owner_priv_1, 
"owner_priv_1" }, \ {1UL << PG_arch_1, "arch_1" }, \ {1UL << PG_reserved, "reserved" }, \ diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6b928e720e83..a4b769f3d832 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1669,6 +1669,10 @@ void free_huge_page(struct page *page) VM_BUG_ON_PAGE(page_mapcount(page), page); hugetlb_set_page_subpool(page, NULL); + if (PageAnon(page)) { + __ClearPageAnonExclusive(page); + wmb(); /* avoid PageSlab() false positives */ + } page->mapping = NULL; restore_reserve = HPageRestoreReserve(page); ClearHPageRestoreReserve(page); diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 97a9ed8f87a9..176fa2fbd699 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -1470,17 +1470,29 @@ static int identify_page_state(unsigned long pfn, struct page *p, * The first check uses the current page flags which may not have any * relevant information. The second check with the saved page flags is * carried out only if the first check can't determine the page status. + * + * Note that PG_slab is also used as PG_anon_exclusive for PageAnon() + * pages. Most of these pages should have been handled previously, + * however, let's play safe and verify via PageAnon(). */ - for (ps = error_states;; ps++) - if ((p->flags & ps->mask) == ps->res) - break; + for (ps = error_states;; ps++) { + if ((p->flags & ps->mask) != ps->res) + continue; + if ((ps->type == MF_MSG_SLAB) && PageAnon(p)) + continue; + break; + } page_flags |= (p->flags & (1UL << PG_dirty)); if (!ps->mask) - for (ps = error_states;; ps++) - if ((page_flags & ps->mask) == ps->res) - break; + for (ps = error_states;; ps++) { + if ((page_flags & ps->mask) != ps->res) + continue; + if ((ps->type == MF_MSG_SLAB) && PageAnon(p)) + continue; + break; + } return page_action(ps, p, pfn); } diff --git a/mm/memremap.c b/mm/memremap.c index 6aa5f0c2d11f..a6b82ae8b3e6 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -478,6 +478,17 @@ void free_devmap_managed_page(struct page *page) mem_cgroup_uncharge(page_folio(page)); + /* + * Note: we don't expect anonymous compound pages yet. Once supported + * and we could PTE-map them similar to THP, we'd have to clear + * PG_anon_exclusive on all tail pages. + */ + VM_BUG_ON_PAGE(PageAnon(page) && PageCompound(page), page); + if (PageAnon(page)) { + __ClearPageAnonExclusive(page); + wmb(); /* avoid PageSlab() false positives */ + } + /* * When a device_private page is freed, the page->mapping field * may still contain a (stale) mapping value. For example, the diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 3589febc6d31..e59e68a61d1a 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1297,6 +1297,7 @@ static __always_inline bool free_pages_prepare(struct page *page, { int bad = 0; bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags); + const bool anon = PageAnon(page); VM_BUG_ON_PAGE(PageTail(page), page); @@ -1329,6 +1330,14 @@ static __always_inline bool free_pages_prepare(struct page *page, ClearPageHasHWPoisoned(page); } for (i = 1; i < (1 << order); i++) { + /* + * Freeing a previously PTE-mapped THP can have + * PG_anon_exclusive set on tail pages. Clear it + * manually as it's overloaded with PG_slab that we + * want to catch in check_free_page(). 
+ */ + if (anon) + __ClearPageAnonExclusive(page + i); if (compound) bad += free_tail_pages_check(page, page + i); if (unlikely(check_free_page(page + i))) { @@ -1338,6 +1347,10 @@ static __always_inline bool free_pages_prepare(struct page *page, (page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP; } } + if (anon) { + __ClearPageAnonExclusive(page); + wmb(); /* avoid PageSlab() false positives */ + } if (PageMappingFlags(page)) page->mapping = NULL; if (memcg_kmem_enabled() && PageMemcgKmem(page)) From patchwork Thu Feb 24 12:26:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 12758464 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8F665C433EF for ; Thu, 24 Feb 2022 12:33:03 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 176308D0002; Thu, 24 Feb 2022 07:33:03 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 125E38D0001; Thu, 24 Feb 2022 07:33:03 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EE1A98D0002; Thu, 24 Feb 2022 07:33:02 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0253.hostedemail.com [216.40.44.253]) by kanga.kvack.org (Postfix) with ESMTP id DAE858D0001 for ; Thu, 24 Feb 2022 07:33:02 -0500 (EST) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 7796F181CEEB8 for ; Thu, 24 Feb 2022 12:33:02 +0000 (UTC) X-FDA: 79177613004.16.6D9E799 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf11.hostedemail.com (Postfix) with ESMTP id BA60340002 for ; Thu, 24 Feb 2022 12:33:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1645705981; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=1KLYI5BQmAXgqMjqGu1RIBhQT13YqDs3AZG7JcuZDXo=; b=E+OfL6Ov+hgSZRF5WCoj/TTeHF0E2JbxciQP+k6oiiODgmwg9xWLc8GtiqmrweapLiBWK1 zzSs72vU9GJh0oJ41RnElcHfQIzsNOsrQJsesz9TrdVcH+LppC3xZJX1Qzq33HfiSogOpa hCZnBFGZ4qiXjYODx9GZw3jsD2WWsrs= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-373-3bMsiQczMEOW_6svZ3h-aA-1; Thu, 24 Feb 2022 07:32:58 -0500 X-MC-Unique: 3bMsiQczMEOW_6svZ3h-aA-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id AFF781006AA6; Thu, 24 Feb 2022 12:32:54 +0000 (UTC) Received: from t480s.redhat.com (unknown [10.39.194.160]) by smtp.corp.redhat.com (Postfix) with ESMTP id 948932635E; Thu, 24 Feb 2022 12:31:38 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: Andrew Morton , Hugh Dickins , Linus Torvalds , David Rientjes , Shakeel Butt , John Hubbard , Jason Gunthorpe , Mike Kravetz , Mike Rapoport , Yang Shi , "Kirill A . 
Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , linux-mm@kvack.org, David Hildenbrand Subject: [PATCH RFC 09/13] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive Date: Thu, 24 Feb 2022 13:26:10 +0100 Message-Id: <20220224122614.94921-10-david@redhat.com> In-Reply-To: <20220224122614.94921-1-david@redhat.com> References: <20220224122614.94921-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: BA60340002 X-Stat-Signature: uz5m677ggkrnggpk1py4ipq7wxr9fhmu Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=E+OfL6Ov; spf=none (imf11.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 170.10.129.124) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com X-Rspam-User: X-HE-Tag: 1645705981-76579 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Let's mark exclusively mapped anonymous pages with PG_anon_exclusive as exclusive, and use that information to make GUP pins reliable and stay consistent with the page mapped into the page table even if the page table entry gets write-protected. With that information at hand, we can extend our COW logic to always reuse anonymous pages that are exclusive. For anonymous pages that might be shared, the existing logic applies. As already documented, PG_anon_exclusive is usually only expressive in combination with a page table entry. Especially PTE vs. PMD-mapped anonymous pages require more thought, some examples: due to mremap() we can easily have a single compound page PTE-mapped into multiple page tables exclusively in a single process -- multiple page table locks apply. Further, due to MADV_WIPEONFORK we might not necessarily write-protect all PTEs, and only some subpages might be pinned. Long story short: once PTE-mapped, we have to track information about exclusivity per sub-page, but until then, we can just track it for the compound page in the head page and not having to update a whole bunch of subpages all of the time for a simple PMD mapping of a THP. For simplicity, this commit mostly talks about "anonymous pages", while it's for THP actually "the part of an anonymous folio referenced via a page table entry". To not spill PG_anon_exclusive code all over the mm code-base, we let the anon rmap code to handle all PG_anon_exclusive logic it can easily handle. If a writable, present page table entry points at an anonymous (sub)page, that (sub)page must be PG_anon_exclusive. If GUP wants to take a reliably pin (FOLL_PIN) on an anonymous page references via a present page table entry, it must only pin if PG_anon_exclusive is set for the mapped (sub)page. This commit doesn't adjust GUP, so this is only implicitly handled for FOLL_WRITE, follow-up commits will teach GUP to also respect it for FOLL_PIN without !FOLL_WRITE, to make all GUP pins of anonymous pages fully reliable. Whenever an anonymous page is to be shared (fork(), KSM), or when temporarily unmapping an anonymous page (swap, migration), the relevant PG_anon_exclusive bit has to be cleared to mark the anonymous page possibly shared. 
Clearing will fail if there are GUP pins on the page: * For fork(), this means having to copy the page and not being able to share it. fork() protects against concurrent GUP using the PT lock and the src_mm->write_protect_seq. * For KSM, this means sharing will fail. For swap, this means unmapping will fail. For migration, this means migration will fail early. All three cases protect against concurrent GUP using the PT lock and a proper clear/invalidate+flush of the relevant page table entry. This fixes memory corruptions reported for FOLL_PIN | FOLL_WRITE, when a pinned page gets mapped R/O and the successive write fault ends up replacing the page instead of reusing it. It improves the situation for O_DIRECT/vmsplice/... that still use FOLL_GET instead of FOLL_PIN, if fork() is *not* involved; however, swapout and fork() are still problematic. Properly using FOLL_PIN instead of FOLL_GET for these GUP users will fix the issue for them. I. Details about basic handling I.1. Fresh anonymous pages page_add_new_anon_rmap() and hugepage_add_new_anon_rmap() will mark the given page exclusive via __page_set_anon_rmap(exclusive=1). As that is the mechanism by which fresh anonymous pages come into life (besides migration code where we copy the page->mapping), all fresh anonymous pages will start out as exclusive. I.2. COW reuse handling of anonymous pages When a COW handler stumbles over a (sub)page that's marked exclusive, it simply reuses it. Otherwise, the handler tries harder under page lock to detect if the (sub)page is exclusive and can be reused. If exclusive, page_move_anon_rmap() will mark the given (sub)page exclusive. Note that hugetlb code does not yet check for PageAnonExclusive(), as it still uses the old COW logic that is prone to the COW security issue, because hugetlb code cannot really tolerate unnecessary/wrong COW as huge pages are a scarce resource. I.3. Migration handling try_to_migrate() has to try marking an exclusive anonymous page shared via page_try_share_anon_rmap(). If it fails because there are GUP pins on the page, unmap fails. migrate_vma_collect_pmd() and __split_huge_pmd_locked() are handled similarly. Writable migration entries implicitly point at shared anonymous pages. For readable migration entries, that information is stored via a new "readable-exclusive" migration entry, specific to anonymous pages. When restoring a migration entry in remove_migration_pte(), information about exclusivity is detected via the migration entry type, and RMAP_EXCLUSIVE is set accordingly for page_add_anon_rmap()/hugepage_add_anon_rmap() to restore that information. I.4. Swapout handling try_to_unmap() has to try marking the mapped page possibly shared via page_try_share_anon_rmap(). If it fails because there are GUP pins on the page, unmap fails. For now, information about exclusivity is lost. In the future, we might want to remember that information in the swap entry in some cases; however, it requires more thought, care, and a way to store that information in swap entries. I.5. Swapin handling do_swap_page() will never stumble over exclusive anonymous pages in the swap cache, as try_to_migrate() prohibits that. do_swap_page() always has to detect manually if an anonymous page is exclusive and has to set RMAP_EXCLUSIVE for page_add_anon_rmap() accordingly. I.6. THP handling __split_huge_pmd_locked() has to move the information about exclusivity from the PMD to the PTEs. a) In case we have a readable-exclusive PMD migration entry, simply insert readable-exclusive PTE migration entries.
b) In case we have a present PMD entry and we don't want to freeze ("convert to migration entries"), simply forward PG_anon_exclusive to all sub-pages, no need to temporarily clear the bit. c) In case we have a present PMD entry and want to freeze, handle it similar to try_to_migrate(): try marking the page shared first. In case we fail, we ignore the "freeze" instruction and simply split ordinarily. try_to_migrate() will properly fail because the THP is still mapped via PTEs. When splitting a compound anonymous folio (THP), the information about exclusivity is implicitly handled via the migration entries: no need to replicate PG_anon_exclusive manually. I.7. fork() handling fork() handling is relatively easy, because PG_anon_exclusive is only expressive for some page table entry types. a) Present anonymous pages page_try_dup_anon_rmap() will mark the given subpage shared -- which will fail if the page is pinned. If it failed, we have to copy (or PTE-map a PMD to handle it on the PTE level). Note that device exclusive entries are just a pointer at a PageAnon() page. fork() will first convert a device exclusive entry to a present page table and handle it just like present anonymous pages. b) Device private entry Device private entries point at PageAnon() pages that cannot be mapped directly and, therefore, cannot get pinned. page_try_dup_anon_rmap() will mark the given subpage shared, which cannot fail because they cannot get pinned. c) HW poison entries PG_anon_exclusive will remain untouched and is stale -- the page table entry is just a placeholder after all. d) Migration entries Writable and readable-exclusive entries are converted to readable entries: possibly shared. I.8. mprotect() handling mprotect() only has to properly handle the new readable-exclusive migration entry: When write-protecting a migration entry that points at an anonymous page, remember the information about exclusivity via the "readable-exclusive" migration entry type. II. Migration and GUP-fast Whenever replacing a present page table entry that maps an exclusive anonymous page by a migration entry, we have to mark the page possibly shared and synchronize against GUP-fast by a proper clear/invalidate+flush to make the following scenario impossible: 1. try_to_migrate() places a migration entry after checking for GUP pins and marks the page possibly shared. 2. GUP-fast pins the page due to lack of synchronization 3. fork() converts the "writable/readable-exclusive" migration entry into a readable migration entry 4. Migration fails due to the GUP pin (failing to freeze the refcount) 5. Migration entries are restored. PG_anon_exclusive is lost -> We have a pinned page that is not marked exclusive anymore. Note that we move information about exclusivity from the page to the migration entry as it otherwise highly overcomplicates fork() and PTE-mapping a THP. III. Swapout and GUP-fast Whenever replacing a present page table entry that maps an exclusive anonymous page by a swap entry, we have to mark the page possibly shared and synchronize against GUP-fast by a proper clear/invalidate+flush to make the following scenario impossible: 1. try_to_unmap() places a swap entry after checking for GUP pins and clears exclusivity information on the page. 2. GUP-fast pins the page due to lack of synchronization. -> We have a pinned page that is not marked exclusive anymore. If we'd ever store information about exclusivity in the swap entry, similar to migration handling, the same considerations as in II would apply. 
This is future work. Signed-off-by: David Hildenbrand --- include/linux/rmap.h | 33 +++++++++++++++++++ include/linux/swap.h | 15 ++++++--- include/linux/swapops.h | 25 ++++++++++++++ mm/huge_memory.c | 73 +++++++++++++++++++++++++++++++++++++---- mm/hugetlb.c | 13 +++++--- mm/ksm.c | 13 +++++++- mm/memory.c | 33 ++++++++++++++----- mm/migrate.c | 44 +++++++++++++++++++++---- mm/mprotect.c | 8 +++-- mm/rmap.c | 49 ++++++++++++++++++++++++--- 10 files changed, 268 insertions(+), 38 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 593a4566420f..c4957bac4868 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -221,6 +221,7 @@ static inline int page_try_dup_anon_rmap(struct page *page, bool compound, unlikely(page_needs_cow_for_dma(vma, page)))) return -EBUSY; + ClearPageAnonExclusive(page); /* * It's okay to share the anon page between both processes, mapping * the page R/O into both processes. @@ -229,6 +230,38 @@ static inline int page_try_dup_anon_rmap(struct page *page, bool compound, return 0; } +/** + * page_try_share_anon_rmap - try marking an exclusive anonymous page possibly + * shared to prepare for KSM or temporary unmapping + * @page: the exclusive anonymous page to try marking possibly shared + * + * The caller needs to hold the PT lock and has to have the page table entry + * cleared/invalidated+flushed, to properly sync against GUP-fast. + * + * This is similar to page_try_dup_anon_rmap(), however, not used during fork() + * to duplicate a mapping, but instead to prepare for KSM or temporarily + * unmapping a page (swap, migration) via page_remove_rmap(). + * + * Marking the page shared can only fail if the page may be pinned; device + * private pages cannot get pinned and consequently this function cannot fail. + * + * Returns 0 if marking the page possibly shared succeeded. Returns -EBUSY + * otherwise. + */ +static inline int page_try_share_anon_rmap(struct page *page) +{ + VM_BUG_ON_PAGE(!PageAnon(page) || !PageAnonExclusive(page), page); + + /* See page_try_dup_anon_rmap(). */ + if (likely(!is_device_private_page(page) && + unlikely(page_maybe_dma_pinned(page)))) + return -EBUSY; + + ClearPageAnonExclusive(page); + return 0; +} + + /* * Called from mm/vmscan.c to handle paging out */ diff --git a/include/linux/swap.h b/include/linux/swap.h index b546e4bd5c5a..422765d1141c 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -78,12 +78,19 @@ static inline int current_is_kswapd(void) #endif /* - * NUMA node memory migration support + * Page migration support. + * + * SWP_MIGRATION_READ_EXCLUSIVE is only applicable to anonymous pages and + * indicates that the referenced (part of) an anonymous page is exclusive to + * a single process. For SWP_MIGRATION_WRITE, that information is implicit: + * (part of) an anonymous page that are mapped writable are exclusive to a + * single process. 
*/ #ifdef CONFIG_MIGRATION -#define SWP_MIGRATION_NUM 2 -#define SWP_MIGRATION_READ (MAX_SWAPFILES + SWP_HWPOISON_NUM) -#define SWP_MIGRATION_WRITE (MAX_SWAPFILES + SWP_HWPOISON_NUM + 1) +#define SWP_MIGRATION_NUM 3 +#define SWP_MIGRATION_READ (MAX_SWAPFILES + SWP_HWPOISON_NUM) +#define SWP_MIGRATION_READ_EXCLUSIVE (MAX_SWAPFILES + SWP_HWPOISON_NUM + 1) +#define SWP_MIGRATION_WRITE (MAX_SWAPFILES + SWP_HWPOISON_NUM + 2) #else #define SWP_MIGRATION_NUM 0 #endif diff --git a/include/linux/swapops.h b/include/linux/swapops.h index d356ab4047f7..06280fc1c99b 100644 --- a/include/linux/swapops.h +++ b/include/linux/swapops.h @@ -194,6 +194,7 @@ static inline bool is_writable_device_exclusive_entry(swp_entry_t entry) static inline int is_migration_entry(swp_entry_t entry) { return unlikely(swp_type(entry) == SWP_MIGRATION_READ || + swp_type(entry) == SWP_MIGRATION_READ_EXCLUSIVE || swp_type(entry) == SWP_MIGRATION_WRITE); } @@ -202,11 +203,26 @@ static inline int is_writable_migration_entry(swp_entry_t entry) return unlikely(swp_type(entry) == SWP_MIGRATION_WRITE); } +static inline int is_readable_migration_entry(swp_entry_t entry) +{ + return unlikely(swp_type(entry) == SWP_MIGRATION_READ); +} + +static inline int is_readable_exclusive_migration_entry(swp_entry_t entry) +{ + return unlikely(swp_type(entry) == SWP_MIGRATION_READ_EXCLUSIVE); +} + static inline swp_entry_t make_readable_migration_entry(pgoff_t offset) { return swp_entry(SWP_MIGRATION_READ, offset); } +static inline swp_entry_t make_readable_exclusive_migration_entry(pgoff_t offset) +{ + return swp_entry(SWP_MIGRATION_READ_EXCLUSIVE, offset); +} + static inline swp_entry_t make_writable_migration_entry(pgoff_t offset) { return swp_entry(SWP_MIGRATION_WRITE, offset); @@ -224,6 +240,11 @@ static inline swp_entry_t make_readable_migration_entry(pgoff_t offset) return swp_entry(0, 0); } +static inline swp_entry_t make_readable_exclusive_migration_entry(pgoff_t offset) +{ + return swp_entry(0, 0); +} + static inline swp_entry_t make_writable_migration_entry(pgoff_t offset) { return swp_entry(0, 0); @@ -244,6 +265,10 @@ static inline int is_writable_migration_entry(swp_entry_t entry) { return 0; } +static inline int is_readable_migration_entry(swp_entry_t entry) +{ + return 0; +} #endif diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 7b9e76ba3f2f..6f2ee41cd423 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1054,7 +1054,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, swp_entry_t entry = pmd_to_swp_entry(pmd); VM_BUG_ON(!is_pmd_migration_entry(pmd)); - if (is_writable_migration_entry(entry)) { + if (!is_readable_migration_entry(entry)) { entry = make_readable_migration_entry( swp_offset(entry)); pmd = swp_entry_to_pmd(entry); @@ -1292,6 +1292,17 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf) page = pmd_page(orig_pmd); VM_BUG_ON_PAGE(!PageHead(page), page); + if (PageAnonExclusive(page)) { + pmd_t entry; + + entry = pmd_mkyoung(orig_pmd); + entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); + if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1)) + update_mmu_cache_pmd(vma, vmf->address, vmf->pmd); + spin_unlock(vmf->ptl); + return VM_FAULT_WRITE; + } + if (!trylock_page(page)) { get_page(page); spin_unlock(vmf->ptl); @@ -1741,6 +1752,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION if (is_swap_pmd(*pmd)) { swp_entry_t entry = pmd_to_swp_entry(*pmd); + struct page *page = pfn_swap_entry_to_page(entry); 
VM_BUG_ON(!is_pmd_migration_entry(*pmd)); if (is_writable_migration_entry(entry)) { @@ -1749,8 +1761,10 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, * A protection check is difficult so * just be safe and disable write */ - entry = make_readable_migration_entry( - swp_offset(entry)); + if (PageAnon(page)) + entry = make_readable_exclusive_migration_entry(swp_offset(entry)); + else + entry = make_readable_migration_entry(swp_offset(entry)); newpmd = swp_entry_to_pmd(entry); if (pmd_swp_soft_dirty(*pmd)) newpmd = pmd_swp_mksoft_dirty(newpmd); @@ -1959,6 +1973,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, pgtable_t pgtable; pmd_t old_pmd, _pmd; bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false; + bool anon_exclusive = false; unsigned long addr; int i; @@ -2040,6 +2055,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, entry = pmd_to_swp_entry(old_pmd); page = pfn_swap_entry_to_page(entry); write = is_writable_migration_entry(entry); + if (PageAnon(page)) + anon_exclusive = is_readable_exclusive_migration_entry(entry); young = false; soft_dirty = pmd_swp_soft_dirty(old_pmd); uffd_wp = pmd_swp_uffd_wp(old_pmd); @@ -2051,6 +2068,23 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, young = pmd_young(old_pmd); soft_dirty = pmd_soft_dirty(old_pmd); uffd_wp = pmd_uffd_wp(old_pmd); + + /* + * Without "freeze", we'll simply split the PMD, propagating the + * PageAnonExclusive() flag for each PTE by setting it for + * each subpage -- no need to (temporarily) clear. + * + * With "freeze" we want to replace mapped pages by + * migration entries right away. This is only possible if we + * managed to clear PageAnonExclusive() -- see + * set_pmd_migration_entry(). + * + * In case we cannot clear PageAnonExclusive(), split the PMD + * only and let try_to_migrate_one() fail later. + */ + anon_exclusive = PageAnon(page) && PageAnonExclusive(page); + if (freeze && page_try_share_anon_rmap(page)) + freeze = false; } VM_BUG_ON_PAGE(!page_count(page), page); page_ref_add(page, HPAGE_PMD_NR - 1); @@ -2074,6 +2108,9 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, if (write) swp_entry = make_writable_migration_entry( page_to_pfn(page + i)); + else if (anon_exclusive) + swp_entry = make_readable_exclusive_migration_entry( + page_to_pfn(page + i)); else swp_entry = make_readable_migration_entry( page_to_pfn(page + i)); @@ -2085,6 +2122,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd, } else { entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot)); entry = maybe_mkwrite(entry, vma); + if (anon_exclusive) + SetPageAnonExclusive(page + i); if (!write) entry = pte_wrprotect(entry); if (!young) @@ -2315,8 +2354,11 @@ static void __split_huge_page_tail(struct page *head, int tail, * * After successful get_page_unless_zero() might follow flags change, * for example lock_page() which set PG_waiters. + * + * Keep PG_anon_exclusive information already maintained for all + * subpages of a compound page untouched. 
*/ - page_tail->flags &= ~PAGE_FLAGS_CHECK_AT_PREP; + page_tail->flags &= ~(PAGE_FLAGS_CHECK_AT_PREP & ~PG_anon_exclusive); page_tail->flags |= (head->flags & ((1L << PG_referenced) | (1L << PG_swapbacked) | @@ -3070,6 +3112,7 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw, struct vm_area_struct *vma = pvmw->vma; struct mm_struct *mm = vma->vm_mm; unsigned long address = pvmw->address; + bool anon_exclusive; pmd_t pmdval; swp_entry_t entry; pmd_t pmdswp; @@ -3079,10 +3122,19 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw, flush_cache_range(vma, address, address + HPAGE_PMD_SIZE); pmdval = pmdp_invalidate(vma, address, pvmw->pmd); + + anon_exclusive = PageAnon(page) && PageAnonExclusive(page); + if (anon_exclusive && page_try_share_anon_rmap(page)) { + set_pmd_at(mm, address, pvmw->pmd, pmdval); + return; + } + if (pmd_dirty(pmdval)) set_page_dirty(page); if (pmd_write(pmdval)) entry = make_writable_migration_entry(page_to_pfn(page)); + else if (anon_exclusive) + entry = make_readable_exclusive_migration_entry(page_to_pfn(page)); else entry = make_readable_migration_entry(page_to_pfn(page)); pmdswp = swp_entry_to_pmd(entry); @@ -3116,10 +3168,17 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new) pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde)); flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE); - if (PageAnon(new)) - page_add_anon_rmap(new, vma, mmun_start, RMAP_COMPOUND); - else + if (PageAnon(new)) { + int rmap_flags = RMAP_COMPOUND; + + if (!is_readable_migration_entry(entry)) + rmap_flags |= RMAP_EXCLUSIVE; + + page_add_anon_rmap(new, vma, mmun_start, rmap_flags); + } else { page_add_file_rmap(new, true); + } + VM_BUG_ON(pmd_write(pmde) && PageAnon(new) && !PageAnonExclusive(new)); set_pmd_at(mm, mmun_start, pvmw->pmd, pmde); if ((vma->vm_flags & VM_LOCKED) && !PageDoubleMap(new)) mlock_vma_page(new); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index a4b769f3d832..0c2f3d500487 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -4767,7 +4767,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, is_hugetlb_entry_hwpoisoned(entry))) { swp_entry_t swp_entry = pte_to_swp_entry(entry); - if (is_writable_migration_entry(swp_entry) && cow) { + if (!is_readable_migration_entry(swp_entry) && cow) { /* * COW mappings require pages in both * parent and child to be set to read. 
@@ -6163,12 +6163,17 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma, } if (unlikely(is_hugetlb_entry_migration(pte))) { swp_entry_t entry = pte_to_swp_entry(pte); + struct page *page = pfn_swap_entry_to_page(entry); - if (is_writable_migration_entry(entry)) { + if (!is_readable_migration_entry(entry)) { pte_t newpte; - entry = make_readable_migration_entry( - swp_offset(entry)); + if (PageAnon(page)) + entry = make_readable_exclusive_migration_entry( + swp_offset(entry)); + else + entry = make_readable_migration_entry( + swp_offset(entry)); newpte = swp_entry_to_pte(entry); set_huge_swap_pte_at(mm, address, ptep, newpte, huge_page_size(h)); diff --git a/mm/ksm.c b/mm/ksm.c index e897577c54c5..c380c36f3ebd 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -866,6 +866,7 @@ static inline struct stable_node *page_stable_node(struct page *page) static inline void set_page_stable_node(struct page *page, struct stable_node *stable_node) { + VM_BUG_ON_PAGE(PageAnon(page) && PageAnonExclusive(page), page); page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM); } @@ -1041,6 +1042,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page, int swapped; int err = -EFAULT; struct mmu_notifier_range range; + bool anon_exclusive; pvmw.address = page_address_in_vma(page, vma); if (pvmw.address == -EFAULT) @@ -1058,9 +1060,10 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page, if (WARN_ONCE(!pvmw.pte, "Unexpected PMD mapping?")) goto out_unlock; + anon_exclusive = PageAnonExclusive(page); if (pte_write(*pvmw.pte) || pte_dirty(*pvmw.pte) || (pte_protnone(*pvmw.pte) && pte_savedwrite(*pvmw.pte)) || - mm_tlb_flush_pending(mm)) { + anon_exclusive || mm_tlb_flush_pending(mm)) { pte_t entry; swapped = PageSwapCache(page); @@ -1088,6 +1091,12 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page, set_pte_at(mm, pvmw.address, pvmw.pte, entry); goto out_unlock; } + + if (anon_exclusive && page_try_share_anon_rmap(page)) { + set_pte_at(mm, pvmw.address, pvmw.pte, entry); + goto out_unlock; + } + if (pte_dirty(entry)) set_page_dirty(page); @@ -1146,6 +1155,8 @@ static int replace_page(struct vm_area_struct *vma, struct page *page, pte_unmap_unlock(ptep, ptl); goto out_mn; } + VM_BUG_ON_PAGE(PageAnonExclusive(page), page); + VM_BUG_ON_PAGE(PageAnon(kpage) && PageAnonExclusive(kpage), kpage); /* * No need to check ksm_use_zero_pages here: we can only have a diff --git a/mm/memory.c b/mm/memory.c index 1ab0dfbec288..11577ee015be 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -720,6 +720,8 @@ static void restore_exclusive_pte(struct vm_area_struct *vma, else if (is_writable_device_exclusive_entry(entry)) pte = maybe_mkwrite(pte_mkdirty(pte), vma); + VM_BUG_ON(pte_write(pte) && !(PageAnon(page) && PageAnonExclusive(page))); + /* * No need to take a page reference as one was already * created when the swap entry was made. @@ -799,11 +801,12 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, rss[mm_counter(page)]++; - if (is_writable_migration_entry(entry) && + if (!is_readable_migration_entry(entry) && is_cow_mapping(vm_flags)) { /* - * COW mappings require pages in both - * parent and child to be set to read. + * COW mappings require pages in both parent and child + * to be set to read. A previously exclusive entry is + * now shared. 
*/ entry = make_readable_migration_entry( swp_offset(entry)); @@ -954,6 +957,7 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, ptep_set_wrprotect(src_mm, addr, src_pte); pte = pte_wrprotect(pte); } + VM_BUG_ON(page && PageAnon(page) && PageAnonExclusive(page)); /* * If it's a shared mapping, mark it clean in @@ -2943,6 +2947,9 @@ static inline void wp_page_reuse(struct vm_fault *vmf) struct vm_area_struct *vma = vmf->vma; struct page *page = vmf->page; pte_t entry; + + VM_BUG_ON(PageAnon(page) && !PageAnonExclusive(page)); + /* * Clear the pages cpupid information as the existing * information potentially belongs to a now completely @@ -3277,6 +3284,13 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) if (PageAnon(vmf->page)) { struct page *page = vmf->page; + /* + * If the page is exclusive to this process we must reuse the + * page without further checks. + */ + if (PageAnonExclusive(page)) + goto reuse; + /* * We have to verify under page lock: these early checks are * just an optimization to avoid locking the page and freeing @@ -3309,6 +3323,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) */ page_move_anon_rmap(page, vma); unlock_page(page); +reuse: wp_page_reuse(vmf); return VM_FAULT_WRITE; } else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) == @@ -3689,11 +3704,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) * that are certainly not shared because we just allocated them without * exposing them to the swapcache. */ - if ((vmf->flags & FAULT_FLAG_WRITE) && !PageKsm(page) && - (page != swapcache || page_count(page) == 1)) { - pte = maybe_mkwrite(pte_mkdirty(pte), vma); - vmf->flags &= ~FAULT_FLAG_WRITE; - ret |= VM_FAULT_WRITE; + if (!PageKsm(page) && (page != swapcache || page_count(page) == 1)) { + if (vmf->flags & FAULT_FLAG_WRITE) { + pte = maybe_mkwrite(pte_mkdirty(pte), vma); + vmf->flags &= ~FAULT_FLAG_WRITE; + ret |= VM_FAULT_WRITE; + } exclusive = RMAP_EXCLUSIVE; } flush_icache_page(vma, page); @@ -3713,6 +3729,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) page_add_anon_rmap(page, vma, vmf->address, exclusive); } + VM_BUG_ON(!PageAnon(page) || (pte_write(pte) && !PageAnonExclusive(page))); set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte); diff --git a/mm/migrate.c b/mm/migrate.c index 709cb11d5b81..63a3241041e1 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -188,6 +188,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, VM_BUG_ON_PAGE(PageTail(page), page); while (page_vma_mapped_walk(&pvmw)) { + int rmap_flags = 0; + if (PageKsm(page)) new = page; else @@ -217,6 +219,9 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, else if (pte_swp_uffd_wp(*pvmw.pte)) pte = pte_mkuffd_wp(pte); + if (PageAnon(new) && !is_readable_migration_entry(entry)) + rmap_flags |= RMAP_EXCLUSIVE; + if (unlikely(is_device_private_page(new))) { if (pte_write(pte)) entry = make_writable_device_private_entry( @@ -238,7 +243,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, pte = pte_mkhuge(pte); pte = arch_make_huge_pte(pte, shift, vma->vm_flags); if (PageAnon(new)) - hugepage_add_anon_rmap(new, vma, pvmw.address, 0); + hugepage_add_anon_rmap(new, vma, pvmw.address, + rmap_flags); else page_dup_file_rmap(new, true); set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte); @@ -246,7 +252,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma, 
#endif { if (PageAnon(new)) - page_add_anon_rmap(new, vma, pvmw.address, 0); + page_add_anon_rmap(new, vma, pvmw.address, + rmap_flags); else page_add_file_rmap(new, false); set_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte); @@ -377,6 +384,10 @@ int folio_migrate_mapping(struct address_space *mapping, /* No turning back from here */ newfolio->index = folio->index; newfolio->mapping = folio->mapping; + /* + * Note: PG_anon_exclusive is always migrated via migration + * entries. + */ if (folio_test_swapbacked(folio)) __folio_set_swapbacked(newfolio); @@ -2300,15 +2311,34 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, * set up a special migration page table entry now. */ if (trylock_page(page)) { + bool anon_exclusive; pte_t swp_pte; + anon_exclusive = PageAnon(page) && PageAnonExclusive(page); + if (anon_exclusive) { + flush_cache_page(vma, addr, pte_pfn(*ptep)); + ptep_clear_flush(vma, addr, ptep); + + if (page_try_share_anon_rmap(page)) { + set_pte_at(mm, addr, ptep, pte); + unlock_page(page); + put_page(page); + mpfn = 0; + goto next; + } + } else { + ptep_get_and_clear(mm, addr, ptep); + } + migrate->cpages++; - ptep_get_and_clear(mm, addr, ptep); /* Setup special migration page table entry */ if (mpfn & MIGRATE_PFN_WRITE) entry = make_writable_migration_entry( page_to_pfn(page)); + else if (anon_exclusive) + entry = make_readable_exclusive_migration_entry( + page_to_pfn(page)); else entry = make_readable_migration_entry( page_to_pfn(page)); @@ -2327,12 +2357,12 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp, set_pte_at(mm, addr, ptep, swp_pte); /* - * This is like regular unmap: we remove the rmap and - * drop page refcount. Page won't be freed, as we took - * a reference just above. + * This is like regular unmap: we remove the + * rmap and drop page refcount. Page won't be + * freed, as we took a reference just above. */ - page_remove_rmap(page, false); put_page(page); + page_remove_rmap(page, false); if (pte_present(pte)) unmapped++; diff --git a/mm/mprotect.c b/mm/mprotect.c index 5ca3fbcb1495..fd8713f877dd 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -141,6 +141,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, pages++; } else if (is_swap_pte(oldpte)) { swp_entry_t entry = pte_to_swp_entry(oldpte); + struct page *page = pfn_swap_entry_to_page(entry); pte_t newpte; if (is_writable_migration_entry(entry)) { @@ -148,8 +149,11 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, * A protection check is difficult so * just be safe and disable write */ - entry = make_readable_migration_entry( - swp_offset(entry)); + if (PageAnon(page)) + entry = make_readable_exclusive_migration_entry( + swp_offset(entry)); + else + entry = make_readable_migration_entry(swp_offset(entry)); newpte = swp_entry_to_pte(entry); if (pte_swp_soft_dirty(oldpte)) newpte = pte_swp_mksoft_dirty(newpte); diff --git a/mm/rmap.c b/mm/rmap.c index bafdb7f70cec..cc54bd1cce61 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1048,6 +1048,7 @@ EXPORT_SYMBOL_GPL(folio_mkclean); void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma) { struct anon_vma *anon_vma = vma->anon_vma; + struct page *subpage = page; page = compound_head(page); @@ -1061,6 +1062,7 @@ void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma) * PageAnon()) will not see one without the other. 
*/ WRITE_ONCE(page->mapping, (struct address_space *) anon_vma); + SetPageAnonExclusive(subpage); } /** @@ -1078,7 +1080,7 @@ static void __page_set_anon_rmap(struct page *page, BUG_ON(!anon_vma); if (PageAnon(page)) - return; + goto out; /* * If the page isn't exclusively mapped into this vma, @@ -1097,6 +1099,9 @@ static void __page_set_anon_rmap(struct page *page, anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON; WRITE_ONCE(page->mapping, (struct address_space *) anon_vma); page->index = linear_page_index(vma, address); +out: + if (exclusive) + SetPageAnonExclusive(page); } /** @@ -1156,6 +1161,8 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma, } else { first = atomic_inc_and_test(&page->_mapcount); } + VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page); + VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page); if (first) { int nr = compound ? thp_nr_pages(page) : 1; @@ -1419,7 +1426,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, }; pte_t pteval; struct page *subpage; - bool ret = true; + bool anon_exclusive, ret = true; struct mmu_notifier_range range; enum ttu_flags flags = (enum ttu_flags)(long)arg; @@ -1482,6 +1489,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte); address = pvmw.address; + anon_exclusive = PageAnon(page) && PageAnonExclusive(subpage); if (PageHuge(page) && !PageAnon(page)) { /* @@ -1517,9 +1525,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, } } - /* Nuke the page table entry. */ + /* + * Nuke the page table entry. When having to clear + * PageAnonExclusive(), we always have to flush. + */ flush_cache_page(vma, address, pte_pfn(*pvmw.pte)); - if (should_defer_flush(mm, flags)) { + if (should_defer_flush(mm, flags) && !anon_exclusive) { /* * We clear the PTE but do not flush so potentially * a remote CPU could still be writing to the page. @@ -1620,6 +1631,14 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, page_vma_mapped_walk_done(&pvmw); break; } + if (anon_exclusive && + page_try_share_anon_rmap(subpage)) { + swap_free(entry); + set_pte_at(mm, address, pvmw.pte, pteval); + ret = false; + page_vma_mapped_walk_done(&pvmw); + break; + } if (list_empty(&mm->mmlist)) { spin_lock(&mmlist_lock); if (list_empty(&mm->mmlist)) @@ -1720,7 +1739,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma, }; pte_t pteval; struct page *subpage; - bool ret = true; + bool anon_exclusive, ret = true; struct mmu_notifier_range range; enum ttu_flags flags = (enum ttu_flags)(long)arg; @@ -1779,6 +1798,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma, subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte); address = pvmw.address; + anon_exclusive = PageAnon(page) && PageAnonExclusive(subpage); if (PageHuge(page) && !PageAnon(page)) { /* @@ -1830,6 +1850,9 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma, swp_entry_t entry; pte_t swp_pte; + if (anon_exclusive) + BUG_ON(page_try_share_anon_rmap(subpage)); + /* * Store the pfn of the page in a special migration * pte. 
do_swap_page() will wait until the migration @@ -1838,6 +1861,8 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma, entry = pte_to_swp_entry(pteval); if (is_writable_device_private_entry(entry)) entry = make_writable_migration_entry(pfn); + else if (anon_exclusive) + entry = make_readable_exclusive_migration_entry(pfn); else entry = make_readable_migration_entry(pfn); swp_pte = swp_entry_to_pte(entry); @@ -1900,6 +1925,15 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma, page_vma_mapped_walk_done(&pvmw); break; } + VM_BUG_ON_PAGE(pte_write(pteval) && PageAnon(page) && + !anon_exclusive, page); + if (anon_exclusive && + page_try_share_anon_rmap(subpage)) { + set_pte_at(mm, address, pvmw.pte, pteval); + ret = false; + page_vma_mapped_walk_done(&pvmw); + break; + } /* * Store the pfn of the page in a special migration @@ -1909,6 +1943,9 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma, if (pte_write(pteval)) entry = make_writable_migration_entry( page_to_pfn(subpage)); + else if (anon_exclusive) + entry = make_readable_exclusive_migration_entry( + page_to_pfn(subpage)); else entry = make_readable_migration_entry( page_to_pfn(subpage)); @@ -2422,6 +2459,8 @@ void hugepage_add_anon_rmap(struct page *page, struct vm_area_struct *vma, BUG_ON(!anon_vma); /* address might be in next vma when migration races vma_adjust */ first = atomic_inc_and_test(compound_mapcount_ptr(page)); + VM_BUG_ON_PAGE(!first && (flags & RMAP_EXCLUSIVE), page); + VM_BUG_ON_PAGE(!first && PageAnonExclusive(page), page); if (first) __page_set_anon_rmap(page, vma, address, flags & RMAP_EXCLUSIVE); } From patchwork Thu Feb 24 12:26:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 12758465 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 75F95C433EF for ; Thu, 24 Feb 2022 12:33:40 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0D96A8D0003; Thu, 24 Feb 2022 07:33:40 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 0895A8D0001; Thu, 24 Feb 2022 07:33:40 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E92CA8D0003; Thu, 24 Feb 2022 07:33:39 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id DC43D8D0001 for ; Thu, 24 Feb 2022 07:33:39 -0500 (EST) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 30BC921709 for ; Thu, 24 Feb 2022 12:33:39 +0000 (UTC) X-FDA: 79177614558.07.7B57F87 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf06.hostedemail.com (Postfix) with ESMTP id 991AB18000C for ; Thu, 24 Feb 2022 12:33:38 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1645706018; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=QWdKc7kD/Q/4rS5vi3P5eDrUhZz3GFyt+nP81Uz1mas=; 
b=c07caFEcPz6vixrH7HzBJDPP6i7D5l2DUr+vm6U5al2P1GGC0MpFl6Fb/st5qc5c0wB7nW hlV7RqLv22X39k7bhg7/Gw455yuEUpCP907Nio0Z9lHHQaHGVeS4VdNoWQFT/DYocfzD0E uSFINGDtWMR7MgDoIvYJl2EMbl/CemY= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-139-_LaSqInlMl2xFN2Y1YreTQ-1; Thu, 24 Feb 2022 07:33:34 -0500 X-MC-Unique: _LaSqInlMl2xFN2Y1YreTQ-1 Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 2B00651EA; Thu, 24 Feb 2022 12:33:31 +0000 (UTC) Received: from t480s.redhat.com (unknown [10.39.194.160]) by smtp.corp.redhat.com (Postfix) with ESMTP id 18FB52635E; Thu, 24 Feb 2022 12:32:54 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: Andrew Morton , Hugh Dickins , Linus Torvalds , David Rientjes , Shakeel Butt , John Hubbard , Jason Gunthorpe , Mike Kravetz , Mike Rapoport , Yang Shi , "Kirill A . Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , linux-mm@kvack.org, David Hildenbrand Subject: [PATCH RFC 10/13] mm/gup: disallow follow_page(FOLL_PIN) Date: Thu, 24 Feb 2022 13:26:11 +0100 Message-Id: <20220224122614.94921-11-david@redhat.com> In-Reply-To: <20220224122614.94921-1-david@redhat.com> References: <20220224122614.94921-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 991AB18000C X-Stat-Signature: 7fsb7fj39jb3cs8t47ibfbqwc77yzgwo Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=c07caFEc; spf=none (imf06.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 170.10.129.124) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com X-Rspam-User: X-HE-Tag: 1645706018-782776 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: We want to change the way we handle R/O pins on anonymous pages that might be shared: if we detect a possibly shared anonymous page -- mapped R/O and not !PageAnonExclusive() -- we want to trigger unsharing via a page fault, resulting in an exclusive anonymous page that can be pinned reliably without getting replaced via COW on the next write fault. However, the required page fault will be problematic for follow_page(): in contrast to ordinary GUP, follow_page() doesn't trigger faults internally. So we would have to end up failing a R/O pin via follow_page(), although there is something mapped R/O into the page table, which might be rather surprising. We don't seem to have follow_page(FOLL_PIN) users, and it's a purely internal MM function. Let's just make our life easier and the semantics of follow_page() clearer by just disallowing FOLL_PIN for follow_page() completely. 
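As a rough sketch of the resulting follow_page() contract -- a user-space model only; the FOLL_GET/FOLL_PIN values here are stand-ins, and the actual change is the small mm/gup.c hunk in the diff that follows:

#include <stdio.h>

#define FOLL_GET 0x1u	/* stand-in value, not the kernel's */
#define FOLL_PIN 0x2u	/* stand-in value, not the kernel's */

/* Model of follow_page(): FOLL_PIN is rejected outright, everything else proceeds. */
static const char *model_follow_page(unsigned int foll_flags)
{
	if (foll_flags & FOLL_PIN)
		return NULL;		/* pinning must go through regular GUP */
	return "struct page";		/* stand-in for the looked-up page */
}

int main(void)
{
	const char *get = model_follow_page(FOLL_GET);
	const char *pin = model_follow_page(FOLL_PIN);

	printf("FOLL_GET -> %s\n", get ? get : "NULL");
	printf("FOLL_PIN -> %s\n", pin ? pin : "NULL");
	return 0;
}

In other words, a FOLL_PIN request now simply yields NULL, while FOLL_GET lookups keep working as before.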
Signed-off-by: David Hildenbrand
---
 mm/gup.c     | 3 +++
 mm/hugetlb.c | 8 +++++---
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index a9d4d724aef7..7127e789d210 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -875,6 +875,9 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 	if (vma_is_secretmem(vma))
 		return NULL;
 
+	if (foll_flags & FOLL_PIN)
+		return NULL;
+
 	page = follow_page_mask(vma, address, foll_flags, &ctx);
 	if (ctx.pgmap)
 		put_dev_pagemap(ctx.pgmap);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0c2f3d500487..8aaca8becf97 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6693,9 +6693,11 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 	spinlock_t *ptl;
 	pte_t pte;
 
-	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
-	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
-			 (FOLL_PIN | FOLL_GET)))
+	/*
+	 * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
+	 * follow_hugetlb_page().
+	 */
+	if (WARN_ON_ONCE(flags & FOLL_PIN))
 		return NULL;
 
 retry:
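
With this in place, kernel code that actually needs a FOLL_PIN reference is
expected to go through the pin_user_pages() family, which -- unlike
follow_page() -- can fault pages in and retry. A minimal, hypothetical sketch
of such a caller (example_pin_user_page() is illustrative only and not part
of this series):

#include <linux/mm.h>

/*
 * Hypothetical driver-style helper: take a R/O FOLL_PIN reference on a
 * single user page. pin_user_pages_fast() may fault the page in -- and,
 * with the following patches, trigger unsharing -- which follow_page()
 * never does; that is why FOLL_PIN is now refused there.
 */
static int example_pin_user_page(unsigned long uaddr, struct page **page)
{
	int ret;

	/* No FOLL_WRITE: this is a R/O pin. */
	ret = pin_user_pages_fast(uaddr, 1, 0, page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	/* ... use *page ... */

	unpin_user_page(*page);
	return 0;
}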

From patchwork Thu Feb 24 12:26:12 2022
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 12758466
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
 Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz, Mike Rapoport,
 Yang Shi, "Kirill A. Shutemov", Matthew Wilcox, Vlastimil Babka, Jann Horn,
 Michal Hocko, Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
 Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
 Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm@kvack.org,
 David Hildenbrand
Subject: [PATCH RFC 11/13] mm: support GUP-triggered unsharing of anonymous pages
Date: Thu, 24 Feb 2022 13:26:12 +0100
Message-Id: <20220224122614.94921-12-david@redhat.com>
In-Reply-To: <20220224122614.94921-1-david@redhat.com>
References: <20220224122614.94921-1-david@redhat.com>

Whenever GUP currently ends up taking a R/O pin on an anonymous page that
might be shared -- mapped R/O and !PageAnonExclusive() -- any write fault
on the page table entry will end up replacing the mapped anonymous page
due to COW, resulting in the GUP pin no longer being consistent with the
page actually mapped into the page table.

The possible ways to deal with this situation are:
 (1) Ignore and pin -- what we do right now.
 (2) Fail to pin -- which would be rather surprising to callers and could
     break user space.
 (3) Trigger unsharing and pin the now exclusive page -- reliable R/O
     pins.

We want to implement 3) because it provides the clearest semantics and
allows for checking in unpin_user_pages() and friends for possible BUGs:
when trying to unpin a page that's no longer exclusive, clearly something
went very wrong and might result in memory corruptions that might be hard
to debug. So we better have a nice way to spot such issues.

To implement 3), we need a way for GUP to trigger unsharing:
FAULT_FLAG_UNSHARE. FAULT_FLAG_UNSHARE is only applicable to R/O mapped
anonymous pages and resembles COW logic during a write fault. However, in
contrast to a write fault, GUP-triggered unsharing will, for example,
still maintain the write protection.

Let's implement FAULT_FLAG_UNSHARE by hooking into the existing write
fault handlers for all applicable anonymous page types: ordinary pages,
THP and hugetlb.

* If FAULT_FLAG_UNSHARE finds a R/O-mapped anonymous page that has been
  marked exclusive in the meantime by someone else, there is nothing to
  do.
* If FAULT_FLAG_UNSHARE finds a R/O-mapped anonymous page that's not
  marked exclusive, it will try detecting if the process is the exclusive
  owner.
If exclusive, it can be set exclusive similar to reuse logic during write faults via page_move_anon_rmap() and there is nothing else to do; otherwise, we either have to copy and map a fresh, anonymous exclusive page R/O (ordinary pages, hugetlb), or split the THP. This commit is heavily based on patches by Andrea. Co-developed-by: Andrea Arcangeli Signed-off-by: Andrea Arcangeli Signed-off-by: David Hildenbrand --- include/linux/mm_types.h | 8 ++++ mm/huge_memory.c | 17 ++++++- mm/hugetlb.c | 56 ++++++++++++++--------- mm/memory.c | 97 +++++++++++++++++++++++++++------------- 4 files changed, 127 insertions(+), 51 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 5140e5feb486..315a8d121382 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -791,6 +791,9 @@ typedef struct { * @FAULT_FLAG_REMOTE: The fault is not for current task/mm. * @FAULT_FLAG_INSTRUCTION: The fault was during an instruction fetch. * @FAULT_FLAG_INTERRUPTIBLE: The fault can be interrupted by non-fatal signals. + * @FAULT_FLAG_UNSHARE: The fault is an unsharing request to unshare (and mark + * exclusive) a possibly shared anonymous page that is + * mapped R/O. * * About @FAULT_FLAG_ALLOW_RETRY and @FAULT_FLAG_TRIED: we can specify * whether we would allow page faults to retry by specifying these two @@ -810,6 +813,10 @@ typedef struct { * continuous faults with flags (b). We should always try to detect pending * signals before a retry to make sure the continuous page faults can still be * interrupted if necessary. + * + * The combination FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE is illegal. + * FAULT_FLAG_UNSHARE is ignored and treated like an ordinary read fault when + * no existing R/O-mapped anonymous page is encountered. */ enum fault_flag { FAULT_FLAG_WRITE = 1 << 0, @@ -822,6 +829,7 @@ enum fault_flag { FAULT_FLAG_REMOTE = 1 << 7, FAULT_FLAG_INSTRUCTION = 1 << 8, FAULT_FLAG_INTERRUPTIBLE = 1 << 9, + FAULT_FLAG_UNSHARE = 1 << 10, }; #endif /* _LINUX_MM_TYPES_H */ diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 6f2ee41cd423..2cb8fa31377a 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1271,6 +1271,7 @@ void huge_pmd_set_accessed(struct vm_fault *vmf) vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf) { + const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE; struct vm_area_struct *vma = vmf->vma; struct page *page; unsigned long haddr = vmf->address & HPAGE_PMD_MASK; @@ -1279,6 +1280,9 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf) vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd); VM_BUG_ON_VMA(!vma->anon_vma, vma); + VM_BUG_ON(unshare && (vmf->flags & FAULT_FLAG_WRITE)); + VM_BUG_ON(!unshare && !(vmf->flags & FAULT_FLAG_WRITE)); + if (is_huge_zero_pmd(orig_pmd)) goto fallback; @@ -1295,6 +1299,12 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf) if (PageAnonExclusive(page)) { pmd_t entry; + /* R/O and exclusive, nothing to do. */ + if (unlikely(unshare)) { + spin_unlock(vmf->ptl); + return 0; + } + entry = pmd_mkyoung(orig_pmd); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1)) @@ -1318,7 +1328,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf) } /* - * See do_wp_page(): we can only map the page writable if there are + * See do_wp_page(): we can only reuse the page exclusively if there are * no additional references. Note that we always drain the LRU * pagevecs immediately after adding a THP. 
*/ @@ -1330,6 +1340,11 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf) pmd_t entry; page_move_anon_rmap(page, vma); + if (unlikely(unshare)) { + unlock_page(page); + spin_unlock(vmf->ptl); + return 0; + } entry = pmd_mkyoung(orig_pmd); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1)) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 8aaca8becf97..ebcde7e9f704 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5140,15 +5140,16 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma, } /* - * Hugetlb_cow() should be called with page lock of the original hugepage held. + * hugetlb_wp() should be called with page lock of the original hugepage held. * Called with hugetlb_fault_mutex_table held and pte_page locked so we * cannot race with other handlers or page migration. * Keep the pte_same checks anyway to make transition from the mutex easier. */ -static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma, - unsigned long address, pte_t *ptep, +static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, unsigned int flags, struct page *pagecache_page, spinlock_t *ptl) { + const bool unshare = flags & FAULT_FLAG_UNSHARE; pte_t pte; struct hstate *h = hstate_vma(vma); struct page *old_page, *new_page; @@ -5157,17 +5158,26 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long haddr = address & huge_page_mask(h); struct mmu_notifier_range range; + VM_BUG_ON(unshare && (flags & FOLL_WRITE)); + VM_BUG_ON(!unshare && !(flags & FOLL_WRITE)); + pte = huge_ptep_get(ptep); old_page = pte_page(pte); retry_avoidcopy: - /* If no-one else is actually using this page, avoid the copy - * and just make the page writable */ + /* + * If no-one else is actually using this page, we're the exclusive + * owner and can reuse this page. 
+ */ if (page_mapcount(old_page) == 1 && PageAnon(old_page)) { + if (unlikely(unshare) && PageAnonExclusive(old_page)) + return 0; page_move_anon_rmap(old_page, vma); - set_huge_ptep_writable(vma, haddr, ptep); + if (likely(!unshare)) + set_huge_ptep_writable(vma, haddr, ptep); return 0; } + VM_BUG_ON_PAGE(PageAnon(old_page) && PageAnonExclusive(old_page), old_page); /* * If the process that created a MAP_PRIVATE mapping is about to @@ -5266,13 +5276,13 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma, if (likely(ptep && pte_same(huge_ptep_get(ptep), pte))) { ClearHPageRestoreReserve(new_page); - /* Break COW */ + /* Break COW or unshare */ huge_ptep_clear_flush(vma, haddr, ptep); mmu_notifier_invalidate_range(mm, range.start, range.end); page_remove_rmap(old_page, true); hugepage_add_new_anon_rmap(new_page, vma, haddr); set_huge_pte_at(mm, haddr, ptep, - make_huge_pte(vma, new_page, 1)); + make_huge_pte(vma, new_page, !unshare)); SetHPageMigratable(new_page); /* Make the old page be freed below */ new_page = old_page; @@ -5280,7 +5290,10 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma, spin_unlock(ptl); mmu_notifier_invalidate_range_end(&range); out_release_all: - /* No restore in case of successful pagetable update (Break COW) */ + /* + * No restore in case of successful pagetable update (Break COW or + * unshare) + */ if (new_page != old_page) restore_reserve_on_error(h, vma, haddr, new_page); put_page(new_page); @@ -5403,7 +5416,8 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm, /* * Currently, we are forced to kill the process in the event the * original mapper has unmapped pages from the child due to a failed - * COW. Warn that such a situation has occurred as it may not be obvious + * COW/unsharing. Warn that such a situation has occurred as it may not + * be obvious. */ if (is_vma_resv_set(vma, HPAGE_RESV_UNMAPPED)) { pr_warn_ratelimited("PID %d killed due to inadequate hugepage pool\n", @@ -5529,7 +5543,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm, hugetlb_count_add(pages_per_huge_page(h), mm); if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) { /* Optimization, do the COW without a second fault */ - ret = hugetlb_cow(mm, vma, address, ptep, page, ptl); + ret = hugetlb_wp(mm, vma, address, ptep, flags, page, ptl); } spin_unlock(ptl); @@ -5659,14 +5673,15 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, goto out_mutex; /* - * If we are going to COW the mapping later, we examine the pending - * reservations for this page now. This will ensure that any + * If we are going to COW/unshare the mapping later, we examine the + * pending reservations for this page now. This will ensure that any * allocations necessary to record that reservation occur outside the * spinlock. For private mappings, we also lookup the pagecache * page now as it is used to determine if a reservation has been * consumed. 
*/ - if ((flags & FAULT_FLAG_WRITE) && !huge_pte_write(entry)) { + if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) && + !huge_pte_write(entry)) { if (vma_needs_reservation(h, vma, haddr) < 0) { ret = VM_FAULT_OOM; goto out_mutex; @@ -5681,12 +5696,12 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, ptl = huge_pte_lock(h, mm, ptep); - /* Check for a racing update before calling hugetlb_cow */ + /* Check for a racing update before calling hugetlb_wp() */ if (unlikely(!pte_same(entry, huge_ptep_get(ptep)))) goto out_ptl; /* - * hugetlb_cow() requires page locks of pte_page(entry) and + * hugetlb_wp() requires page locks of pte_page(entry) and * pagecache_page, so here we need take the former one * when page != pagecache_page or !pagecache_page. */ @@ -5699,13 +5714,14 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, get_page(page); - if (flags & FAULT_FLAG_WRITE) { + if (flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) { if (!huge_pte_write(entry)) { - ret = hugetlb_cow(mm, vma, address, ptep, - pagecache_page, ptl); + ret = hugetlb_wp(mm, vma, address, ptep, flags, + pagecache_page, ptl); goto out_put_page; + } else if (likely(flags & FAULT_FLAG_WRITE)) { + entry = huge_pte_mkdirty(entry); } - entry = huge_pte_mkdirty(entry); } entry = pte_mkyoung(entry); if (huge_ptep_set_access_flags(vma, haddr, ptep, entry, diff --git a/mm/memory.c b/mm/memory.c index 11577ee015be..33d046c34819 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2739,8 +2739,8 @@ static inline int pte_unmap_same(struct vm_fault *vmf) return same; } -static inline bool cow_user_page(struct page *dst, struct page *src, - struct vm_fault *vmf) +static inline bool __wp_page_copy_user(struct page *dst, struct page *src, + struct vm_fault *vmf) { bool ret; void *kaddr; @@ -2968,7 +2968,8 @@ static inline void wp_page_reuse(struct vm_fault *vmf) } /* - * Handle the case of a page which we actually need to copy to a new page. + * Handle the case of a page which we actually need to copy to a new page, + * either due to COW or unsharing. * * Called with mmap_lock locked and the old page referenced, but * without the ptl held. @@ -2985,6 +2986,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf) */ static vm_fault_t wp_page_copy(struct vm_fault *vmf) { + const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE; struct vm_area_struct *vma = vmf->vma; struct mm_struct *mm = vma->vm_mm; struct page *old_page = vmf->page; @@ -3007,7 +3009,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) if (!new_page) goto oom; - if (!cow_user_page(new_page, old_page, vmf)) { + if (!__wp_page_copy_user(new_page, old_page, vmf)) { /* * COW failed, if the fault was solved by other, * it's fine. If not, userspace would re-fault on @@ -3049,7 +3051,14 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte)); entry = mk_pte(new_page, vma->vm_page_prot); entry = pte_sw_mkyoung(entry); - entry = maybe_mkwrite(pte_mkdirty(entry), vma); + if (unlikely(unshare)) { + if (pte_soft_dirty(vmf->orig_pte)) + entry = pte_mksoft_dirty(entry); + if (pte_uffd_wp(vmf->orig_pte)) + entry = pte_mkuffd_wp(entry); + } else { + entry = maybe_mkwrite(pte_mkdirty(entry), vma); + } /* * Clear the pte entry and flush it first, before updating the @@ -3066,6 +3075,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) * mmu page tables (such as kvm shadow page tables), we want the * new page to be mapped directly into the secondary page table. 
*/ + BUG_ON(unshare && pte_write(entry)); set_pte_at_notify(mm, vmf->address, vmf->pte, entry); update_mmu_cache(vma, vmf->address, vmf->pte); if (old_page) { @@ -3125,7 +3135,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) free_swap_cache(old_page); put_page(old_page); } - return page_copied ? VM_FAULT_WRITE : 0; + return page_copied && !unshare ? VM_FAULT_WRITE : 0; oom_free_new: put_page(new_page); oom: @@ -3225,18 +3235,22 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf) } /* - * This routine handles present pages, when users try to write - * to a shared page. It is done by copying the page to a new address - * and decrementing the shared-page counter for the old page. + * This routine handles present pages, when + * * users try to write to a shared page (FAULT_FLAG_WRITE) + * * GUP wants to take a R/O pin on a possibly shared anonymous page + * (FAULT_FLAG_UNSHARE) + * + * It is done by copying the page to a new address and decrementing the + * shared-page counter for the old page. * * Note that this routine assumes that the protection checks have been * done by the caller (the low-level page fault routine in most cases). - * Thus we can safely just mark it writable once we've done any necessary - * COW. + * Thus, with FAULT_FLAG_WRITE, we can safely just mark it writable once we've + * done any necessary COW. * - * We also mark the page dirty at this point even though the page will - * change only once the write actually happens. This avoids a few races, - * and potentially makes it more efficient. + * In case of FAULT_FLAG_WRITE, we also mark the page dirty at this point even + * though the page will change only once the write actually happens. This + * avoids a few races, and potentially makes it more efficient. * * We enter with non-exclusive mmap_lock (to exclude vma changes, * but allow concurrent faults), with pte both mapped and locked. @@ -3245,23 +3259,35 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf) static vm_fault_t do_wp_page(struct vm_fault *vmf) __releases(vmf->ptl) { + const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE; struct vm_area_struct *vma = vmf->vma; - if (userfaultfd_pte_wp(vma, *vmf->pte)) { - pte_unmap_unlock(vmf->pte, vmf->ptl); - return handle_userfault(vmf, VM_UFFD_WP); - } + VM_BUG_ON(unshare && (vmf->flags & FAULT_FLAG_WRITE)); + VM_BUG_ON(!unshare && !(vmf->flags & FAULT_FLAG_WRITE)); - /* - * Userfaultfd write-protect can defer flushes. Ensure the TLB - * is flushed in this case before copying. - */ - if (unlikely(userfaultfd_wp(vmf->vma) && - mm_tlb_flush_pending(vmf->vma->vm_mm))) - flush_tlb_page(vmf->vma, vmf->address); + if (likely(!unshare)) { + if (userfaultfd_pte_wp(vma, *vmf->pte)) { + pte_unmap_unlock(vmf->pte, vmf->ptl); + return handle_userfault(vmf, VM_UFFD_WP); + } + + /* + * Userfaultfd write-protect can defer flushes. Ensure the TLB + * is flushed in this case before copying. + */ + if (unlikely(userfaultfd_wp(vmf->vma) && + mm_tlb_flush_pending(vmf->vma->vm_mm))) + flush_tlb_page(vmf->vma, vmf->address); + } vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte); if (!vmf->page) { + /* No anonymous page -> nothing to do. */ + if (unlikely(unshare)) { + pte_unmap_unlock(vmf->pte, vmf->ptl); + return 0; + } + /* * VM_MIXEDMAP !pfn_valid() case, or VM_SOFTDIRTY clear on a * VM_PFNMAP VMA. 
@@ -3324,8 +3350,16 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 		page_move_anon_rmap(page, vma);
 		unlock_page(page);
 reuse:
+		if (unlikely(unshare)) {
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+			return 0;
+		}
 		wp_page_reuse(vmf);
 		return VM_FAULT_WRITE;
+	} else if (unshare) {
+		/* No anonymous page -> nothing to do. */
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+		return 0;
 	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
 					(VM_WRITE|VM_SHARED))) {
 		return wp_page_shared(vmf);
@@ -4506,8 +4540,11 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
 /* `inline' is required to avoid gcc 4.1.2 build error */
 static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
 {
+	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
+
 	if (vma_is_anonymous(vmf->vma)) {
-		if (userfaultfd_huge_pmd_wp(vmf->vma, vmf->orig_pmd))
+		if (likely(!unshare) &&
+		    userfaultfd_huge_pmd_wp(vmf->vma, vmf->orig_pmd))
 			return handle_userfault(vmf, VM_UFFD_WP);
 		return do_huge_pmd_wp_page(vmf);
 	}
@@ -4642,7 +4679,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
 		goto unlock;
 	}
-	if (vmf->flags & FAULT_FLAG_WRITE) {
+	if (vmf->flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) {
 		if (!pte_write(entry))
 			return do_wp_page(vmf);
 		entry = pte_mkdirty(entry);
@@ -4685,7 +4722,6 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		.pgoff = linear_page_index(vma, address),
 		.gfp_mask = __get_fault_gfp_mask(vma),
 	};
-	unsigned int dirty = flags & FAULT_FLAG_WRITE;
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4712,7 +4748,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 
 		/* NUMA case for anonymous PUDs would go here */
 
-		if (dirty && !pud_write(orig_pud)) {
+		if ((flags & FAULT_FLAG_WRITE) && !pud_write(orig_pud)) {
 			ret = wp_huge_pud(&vmf, orig_pud);
 			if (!(ret & VM_FAULT_FALLBACK))
 				return ret;
@@ -4750,7 +4786,8 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma))
 			return do_huge_pmd_numa_page(&vmf);
 
-		if (dirty && !pmd_write(vmf.orig_pmd)) {
+		if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) &&
+		    !pmd_write(vmf.orig_pmd)) {
 			ret = wp_huge_pmd(&vmf);
 			if (!(ret & VM_FAULT_FALLBACK))
 				return ret;
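
For illustration, a minimal, hypothetical sketch of requesting such an
unshare fault is shown below; the next patch wires exactly this up in GUP's
faultin_page(). example_unshare() is illustrative only and not part of this
series:

#include <linux/mm.h>
#include <linux/mm_types.h>

/*
 * Hypothetical helper: ask the fault handlers to make the anonymous page
 * mapped R/O at @address exclusive, without mapping it writable.
 * FAULT_FLAG_UNSHARE must not be combined with FAULT_FLAG_WRITE, and the
 * caller holds the mmap_lock, as for any handle_mm_fault() call.
 */
static vm_fault_t example_unshare(struct vm_area_struct *vma,
				  unsigned long address)
{
	return handle_mm_fault(vma, address, FAULT_FLAG_UNSHARE, NULL);
}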

From patchwork Thu Feb 24 12:26:13 2022
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 12758483
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
 Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz, Mike Rapoport,
 Yang Shi, "Kirill A. Shutemov", Matthew Wilcox, Vlastimil Babka, Jann Horn,
 Michal Hocko, Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
 Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
 Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm@kvack.org,
 David Hildenbrand
Subject: [PATCH RFC 12/13] mm/gup: trigger FAULT_FLAG_UNSHARE when R/O-pinning a possibly shared anonymous page
Date: Thu, 24 Feb 2022 13:26:13 +0100
Message-Id: <20220224122614.94921-13-david@redhat.com>
In-Reply-To: <20220224122614.94921-1-david@redhat.com>
References: <20220224122614.94921-1-david@redhat.com>

Whenever GUP currently ends up taking a R/O pin on an anonymous page that
might be shared -- mapped R/O and !PageAnonExclusive() -- any write fault
on the page table entry will end up replacing the mapped anonymous page
due to COW, resulting in the GUP pin no longer being consistent with the
page actually mapped into the page table.

The possible ways to deal with this situation are:
 (1) Ignore and pin -- what we do right now.
 (2) Fail to pin -- which would be rather surprising to callers and could
     break user space.
(3) Trigger unsharing and pin the now exclusive page -- reliable R/O pins. Let's implement 3) because it provides the clearest semantics and allows for checking in unpin_user_pages() and friends for possible BUGs: when trying to unpin a page that's no longer exclusive, clearly something went very wrong and might result in memory corruptions that might be hard to debug. So we better have a nice way to spot such issues. This change implies that whenever user space *wrote* to a private mapping (IOW, we have an anonymous page mapped), that GUP pins will always remain consistent: reliable R/O GUP pins of anonymous pages. As a side note, this commit fixes the COW security issue for hugetlb with FOLL_PIN as documented in: https://lore.kernel.org/r/3ae33b08-d9ef-f846-56fb-645e3b9b4c66@redhat.com The vmsplice reproducer still applies, because vmsplice uses FOLL_GET instead of FOLL_PIN. Note that follow_huge_pmd() doesn't apply because we cannot end up in there with FOLL_PIN. This commit is heavily based on prototype patches by Andrea. Co-developed-by: Andrea Arcangeli Signed-off-by: Andrea Arcangeli Signed-off-by: David Hildenbrand --- include/linux/mm.h | 39 +++++++++++++++++++++++++++++++++++++++ mm/gup.c | 32 +++++++++++++++++++++++++++++--- mm/huge_memory.c | 3 +++ mm/hugetlb.c | 25 ++++++++++++++++++++++--- 4 files changed, 93 insertions(+), 6 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index ffeb7f5fa32b..48ce9b7f903d 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3003,6 +3003,45 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags) return 0; } +/* + * Indicates for which pages that are write-protected in the page table, + * whether GUP has to trigger unsharing via FAULT_FLAG_UNSHARE such that the + * GUP pin will remain consistent with the pages mapped into the page tables + * of the MM. + * + * Temporary unmapping of PageAnonExclusive() pages or clearing of + * PageAnonExclusive() has to protect against concurrent GUP: + * * Ordinary GUP: Using the PT lock + * * GUP-fast and fork(): mm->write_protect_seq + * * GUP-fast and KSM or temporary unmapping (swap, migration): + * clear/invalidate+flush of the page table entry + * + * Must be called with the (sub)page that's actually referenced via the + * page table entry, which might not necessarily be the head page for a + * PTE-mapped THP. + */ +static inline bool gup_must_unshare(unsigned int flags, struct page *page) +{ + /* + * FOLL_WRITE is implicitly handled correctly as the page table entry + * has to be writable -- and if it references (part of) an anonymous + * folio, that part is required to be marked exclusive. + */ + if ((flags & (FOLL_WRITE | FOLL_PIN)) != FOLL_PIN) + return false; + /* + * Note: PageAnon(page) is stable until the page is actually getting + * freed. + */ + if (!PageAnon(page)) + return false; + /* + * Note that PageKsm() pages cannot be exclusive, and consequently, + * cannot get pinned. 
+ */ + return !PageAnonExclusive(page); +} + typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data); extern int apply_to_page_range(struct mm_struct *mm, unsigned long address, unsigned long size, pte_fn_t fn, void *data); diff --git a/mm/gup.c b/mm/gup.c index 7127e789d210..2cb7cbb1fc1f 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -568,6 +568,10 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, } } + if (!pte_write(pte) && gup_must_unshare(flags, page)) { + page = ERR_PTR(-EMLINK); + goto out; + } /* try_grab_page() does nothing unless FOLL_GET or FOLL_PIN is set. */ if (unlikely(!try_grab_page(page, flags))) { page = ERR_PTR(-ENOMEM); @@ -820,6 +824,11 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma, * When getting pages from ZONE_DEVICE memory, the @ctx->pgmap caches * the device's dev_pagemap metadata to avoid repeating expensive lookups. * + * When getting an anonymous page and the caller has to trigger unsharing + * of a shared anonymous page first, -EMLINK is returned. The caller should + * trigger a fault with FAULT_FLAG_UNSHARE set. Note that unsharing is only + * relevant with FOLL_PIN and !FOLL_WRITE. + * * On output, the @ctx->page_mask is set according to the size of the page. * * Return: the mapped (struct page *), %NULL if no mapping exists, or @@ -943,7 +952,8 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address, * is, *@locked will be set to 0 and -EBUSY returned. */ static int faultin_page(struct vm_area_struct *vma, - unsigned long address, unsigned int *flags, int *locked) + unsigned long address, unsigned int *flags, bool unshare, + int *locked) { unsigned int fault_flags = 0; vm_fault_t ret; @@ -968,6 +978,11 @@ static int faultin_page(struct vm_area_struct *vma, */ fault_flags |= FAULT_FLAG_TRIED; } + if (unshare) { + fault_flags |= FAULT_FLAG_UNSHARE; + /* FAULT_FLAG_WRITE and FAULT_FLAG_UNSHARE are incompatible */ + VM_BUG_ON(fault_flags & FAULT_FLAG_WRITE); + } ret = handle_mm_fault(vma, address, fault_flags, NULL); if (ret & VM_FAULT_ERROR) { @@ -1189,8 +1204,9 @@ static long __get_user_pages(struct mm_struct *mm, cond_resched(); page = follow_page_mask(vma, start, foll_flags, &ctx); - if (!page) { - ret = faultin_page(vma, start, &foll_flags, locked); + if (!page || PTR_ERR(page) == -EMLINK) { + ret = faultin_page(vma, start, &foll_flags, + PTR_ERR(page) == -EMLINK, locked); switch (ret) { case 0: goto retry; @@ -2346,6 +2362,11 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, goto pte_unmap; } + if (!pte_write(pte) && gup_must_unshare(flags, page)) { + put_compound_head(head, 1, flags); + goto pte_unmap; + } + VM_BUG_ON_PAGE(compound_head(page) != head, page); /* @@ -2589,6 +2610,11 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr, return 0; } + if (!pmd_write(orig) && gup_must_unshare(flags, head)) { + put_compound_head(head, refs, flags); + return 0; + } + *nr += refs; SetPageReferenced(head); return 1; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 2cb8fa31377a..334cd9381635 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1396,6 +1396,9 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, page = pmd_page(*pmd); VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page); + if (!pmd_write(*pmd) && gup_must_unshare(flags, page)) + return ERR_PTR(-EMLINK); + if (!try_grab_page(page, flags)) return ERR_PTR(-ENOMEM); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index ebcde7e9f704..6c2a7dd8a48d 100644 --- a/mm/hugetlb.c 
+++ b/mm/hugetlb.c
@@ -5957,6 +5957,23 @@ static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
 	}
 }
 
+static inline bool __follow_hugetlb_must_fault(unsigned int flags, pte_t *pte,
+					       bool *unshare)
+{
+	pte_t pteval = huge_ptep_get(pte);
+
+	*unshare = false;
+	if (is_swap_pte(pteval))
+		return true;
+	if (huge_pte_write(pteval))
+		return false;
+	if (flags & FOLL_WRITE)
+		return true;
+	if (gup_must_unshare(flags, pte_page(pteval)))
+		*unshare = true;
+	return *unshare;
+}
+
 long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 struct page **pages, struct vm_area_struct **vmas,
 			 unsigned long *position, unsigned long *nr_pages,
@@ -5971,6 +5988,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	while (vaddr < vma->vm_end && remainder) {
 		pte_t *pte;
 		spinlock_t *ptl = NULL;
+		bool unshare;
 		int absent;
 		struct page *page;
 
@@ -6021,9 +6039,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		 * both cases, and because we can't follow correct pages
 		 * directly from any kind of swap entries.
 		 */
-		if (absent || is_swap_pte(huge_ptep_get(pte)) ||
-		    ((flags & FOLL_WRITE) &&
-		      !huge_pte_write(huge_ptep_get(pte)))) {
+		if (absent ||
+		    __follow_hugetlb_must_fault(flags, pte, &unshare)) {
 			vm_fault_t ret;
 			unsigned int fault_flags = 0;
 
@@ -6031,6 +6048,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			spin_unlock(ptl);
 			if (flags & FOLL_WRITE)
 				fault_flags |= FAULT_FLAG_WRITE;
+			else if (unshare)
+				fault_flags |= FAULT_FLAG_UNSHARE;
 			if (locked)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
 					FAULT_FLAG_KILLABLE;
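
For illustration, the guarantee this gives to R/O pinning code can be
sketched as follows. example_pin_buffer_ro() and EXAMPLE_NR_PAGES are
hypothetical and not part of this series; the point is that, after this
patch, the pinned anonymous pages remain the ones mapped into the process,
even across later write faults in that process:

#include <linux/mm.h>

#define EXAMPLE_NR_PAGES	16	/* hypothetical buffer size in pages */

/*
 * Hypothetical example: take longterm R/O pins on a user buffer, e.g. to
 * read from it asynchronously later. The R/O pin itself now unshares
 * possibly shared anonymous pages, so later write faults in the process
 * no longer COW-replace any of the pinned pages underneath us.
 */
static int example_pin_buffer_ro(unsigned long uaddr, struct page **pages)
{
	int ret;

	ret = pin_user_pages_fast(uaddr, EXAMPLE_NR_PAGES,
				  FOLL_LONGTERM, pages);
	if (ret < 0)
		return ret;
	if (ret != EXAMPLE_NR_PAGES) {
		unpin_user_pages(pages, ret);
		return -EFAULT;
	}

	/* ... hand the pages to hardware / read from them ... */

	unpin_user_pages(pages, EXAMPLE_NR_PAGES);
	return 0;
}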

From patchwork Thu Feb 24 12:26:14 2022
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 12758482
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes,
 Shakeel Butt, John Hubbard, Jason Gunthorpe, Mike Kravetz, Mike Rapoport,
 Yang Shi, "Kirill A. Shutemov", Matthew Wilcox, Vlastimil Babka, Jann Horn,
 Michal Hocko, Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
 Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara,
 Liang Zhang, Pedro Gomes, Oded Gabbay, linux-mm@kvack.org,
 David Hildenbrand
Subject: [PATCH RFC 13/13] mm/gup: sanity-check with CONFIG_DEBUG_VM that anonymous pages are exclusive when (un)pinning
Date: Thu, 24 Feb 2022 13:26:14 +0100
Message-Id: <20220224122614.94921-14-david@redhat.com>
In-Reply-To: <20220224122614.94921-1-david@redhat.com>
References: <20220224122614.94921-1-david@redhat.com>

Let's verify when (un)pinning anonymous pages that we always deal with
exclusive anonymous pages, which guarantees that we'll have a reliable
PIN, meaning that we cannot end up with the GUP pin being inconsistent
with the pages mapped into the page tables due to a COW triggered by a
write fault.

When pinning pages, after conditionally triggering GUP unsharing of
possibly shared anonymous pages, we should always only see exclusive
anonymous pages. Note that anonymous pages that are mapped writable must
be marked exclusive, otherwise we'd have a BUG.

When pinning during ordinary GUP, simply add a check after our
conditional GUP-triggered unsharing checks. As we know exactly how the
page is mapped, we know exactly in which page we have to check for
PageAnonExclusive().

When pinning via GUP-fast we have to be careful, because we can race with
fork(): verify only after we made sure via the seqcount that we didn't
race with concurrent fork() that we didn't end up pinning a possibly
shared anonymous page.
Similarly, when unpinning, verify that the pages are still marked as exclusive: otherwise something turned the pages possibly shared, which can result in random memory corruptions, which we really want to catch. With only the pinned pages at hand and not the actual page table entries we have to be a bit careful: hugetlb pages are always mapped via a single logical page table entry referencing the head page and PG_anon_exclusive of the head page applies. Anon THP are a bit more complicated, because we might have obtained the page reference either via a PMD or a PTE -- depending on the mapping type we either have to check PageAnonExclusive of the head page (PMD-mapped THP) or the tail page (PTE-mapped THP) applies: as we don't know and to make our life easier, check that either is set. Take care to not verify in case we're unpinning during GUP-fast because we detected concurrent fork(): we might stumble over an anonymous page that is now shared. Signed-off-by: David Hildenbrand --- mm/gup.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++- mm/huge_memory.c | 3 +++ mm/hugetlb.c | 3 +++ 3 files changed, 66 insertions(+), 1 deletion(-) diff --git a/mm/gup.c b/mm/gup.c index 2cb7cbb1fc1f..df1ae29a1f5f 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -45,6 +45,41 @@ static void hpage_pincount_sub(struct page *page, int refs) atomic_sub(refs, compound_pincount_ptr(page)); } +static inline void sanity_check_pinned_pages(struct page **pages, + unsigned long npages) +{ +#ifdef CONFIG_DEBUG_VM + /* + * We expect to only succeed in pinning anonymous pages if they are + * exclusive. Once pinned, we can no longer mark them shared and the + * PageAnonExclusive() will stick around until the page is freed + * after it was unmapped. + * + * We'd like to verify that our pinned anonymous pages are mapped + * exclusively. The issue with anon THP is that we don't know how + * they are/were mapped when pinning them. However, for anon + * THP we can assume that either the given page (PTE-mapped THP) or + * the head page (PMD-mapped THP) should be PageAnonExclusive(). If + * neither is the case there is certainly something wrong. + */ + for (; npages; npages--, pages++) { + struct page *page = *pages; + struct page *head = compound_head(page); + + if (!PageAnon(page)) + continue; + if (!PageCompound(page)) + VM_BUG_ON_PAGE(!PageAnonExclusive(page), page); + else if (PageHuge(page)) + VM_BUG_ON_PAGE(!PageAnonExclusive(head), page); + else + /* Either PTE-mapped or PMD-mapped, we don't know. */ + VM_BUG_ON_PAGE(!PageAnonExclusive(head) && + !PageAnonExclusive(page), page); + } +#endif /* CONFIG_DEBUG_VM */ +} + /* Equivalent to calling put_page() @refs times. 
*/ static void put_page_refs(struct page *page, int refs) { @@ -250,6 +285,7 @@ bool __must_check try_grab_page(struct page *page, unsigned int flags) */ void unpin_user_page(struct page *page) { + sanity_check_pinned_pages(&page, 1); put_compound_head(compound_head(page), 1, FOLL_PIN); } EXPORT_SYMBOL(unpin_user_page); @@ -340,6 +376,7 @@ void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages, return; } + sanity_check_pinned_pages(pages, npages); for_each_compound_head(index, pages, npages, head, ntails) { /* * Checking PageDirty at this point may race with @@ -404,6 +441,21 @@ void unpin_user_page_range_dirty_lock(struct page *page, unsigned long npages, } EXPORT_SYMBOL(unpin_user_page_range_dirty_lock); +static void unpin_user_pages_lockless(struct page **pages, unsigned long npages) +{ + unsigned long index; + struct page *head; + unsigned int ntails; + + /* + * Don't perform any sanity checks because we might have raced with + * fork() and some anonymous pages might no actually be shared -- + * which is why we're unpinning after all. + */ + for_each_compound_head(index, pages, npages, head, ntails) + put_compound_head(head, ntails, FOLL_PIN); +} + /** * unpin_user_pages() - release an array of gup-pinned pages. * @pages: array of pages to be marked dirty and released. @@ -426,6 +478,7 @@ void unpin_user_pages(struct page **pages, unsigned long npages) */ if (WARN_ON(IS_ERR_VALUE(npages))) return; + sanity_check_pinned_pages(pages, npages); for_each_compound_head(index, pages, npages, head, ntails) put_compound_head(head, ntails, FOLL_PIN); @@ -572,6 +625,10 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, page = ERR_PTR(-EMLINK); goto out; } + + VM_BUG_ON((flags & FOLL_PIN) && PageAnon(page) && + !PageAnonExclusive(page)); + /* try_grab_page() does nothing unless FOLL_GET or FOLL_PIN is set. */ if (unlikely(!try_grab_page(page, flags))) { page = ERR_PTR(-ENOMEM); @@ -2885,8 +2942,10 @@ static unsigned long lockless_pages_from_mm(unsigned long start, */ if (gup_flags & FOLL_PIN) { if (read_seqcount_retry(¤t->mm->write_protect_seq, seq)) { - unpin_user_pages(pages, nr_pinned); + unpin_user_pages_lockless(pages, nr_pinned); return 0; + } else { + sanity_check_pinned_pages(pages, nr_pinned); } } return nr_pinned; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 334cd9381635..72ed42d0ef3c 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1399,6 +1399,9 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, if (!pmd_write(*pmd) && gup_must_unshare(flags, page)) return ERR_PTR(-EMLINK); + VM_BUG_ON((flags & FOLL_PIN) && PageAnon(page) && + !PageAnonExclusive(page)); + if (!try_grab_page(page, flags)) return ERR_PTR(-ENOMEM); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6c2a7dd8a48d..e1f7fa2b0292 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -6091,6 +6091,9 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma, pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT; page = pte_page(huge_ptep_get(pte)); + VM_BUG_ON((flags & FOLL_PIN) && PageAnon(page) && + !PageAnonExclusive(page)); + /* * If subpage information not requested, update counters * and skip the same_page loop below.