From patchwork Fri Nov 24 13:26:17 2023
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13467665
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Linus Torvalds, Ryan Roberts, Matthew Wilcox, Hugh Dickins, Yin Fengwei, Yang Shi, Ying Huang, Zi Yan, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, "Paul E. McKenney"
Subject: [PATCH WIP v1 12/20] mm/rmap: introduce folio_add_anon_rmap_range()
Date: Fri, 24 Nov 2023 14:26:17 +0100
Message-ID: <20231124132626.235350-13-david@redhat.com>
In-Reply-To: <20231124132626.235350-1-david@redhat.com>
References: <20231124132626.235350-1-david@redhat.com>
MIME-Version: 1.0
There are probably ways to have an even cleaner interface (e.g., pass the
mapping granularity instead of "compound"). For now, let's handle it like
folio_add_file_rmap_range().

Use separate loops for handling the "SetPageAnonExclusive()" case and
performing debug checks. The latter should get optimized out automatically
without CONFIG_DEBUG_VM.

We'll use this function to batch rmap operations when PTE-remapping a
PMD-mapped THP next.
Signed-off-by: David Hildenbrand
---
 include/linux/rmap.h |  3 ++
 mm/rmap.c            | 69 +++++++++++++++++++++++++++++++++-----------
 2 files changed, 55 insertions(+), 17 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 39aeab457f4a..76e6fb1dad5c 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -393,6 +393,9 @@ typedef int __bitwise rmap_t;
  * rmap interfaces called when adding or removing pte of page
  */
 void folio_move_anon_rmap(struct folio *, struct vm_area_struct *);
+void folio_add_anon_rmap_range(struct folio *, struct page *,
+		unsigned int nr_pages, struct vm_area_struct *,
+		unsigned long address, rmap_t flags);
 void page_add_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
 void page_add_new_anon_rmap(struct page *, struct vm_area_struct *,
diff --git a/mm/rmap.c b/mm/rmap.c
index 689ad85cf87e..da7fa46a18fc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1240,25 +1240,29 @@ static void __page_check_anon_rmap(struct folio *folio, struct page *page,
 }
 
 /**
- * page_add_anon_rmap - add pte mapping to an anonymous page
- * @page:	the page to add the mapping to
- * @vma:	the vm area in which the mapping is added
- * @address:	the user virtual address mapped
- * @flags:	the rmap flags
+ * folio_add_anon_rmap_range - add mappings to a page range of an anon folio
+ * @folio:	The folio to add the mapping to
+ * @page:	The first page to add
+ * @nr_pages:	The number of pages which will be mapped
+ * @vma:	The vm area in which the mapping is added
+ * @address:	The user virtual address of the first page to map
+ * @flags:	The rmap flags
+ *
+ * The page range of folio is defined by [first_page, first_page + nr_pages)
  *
  * The caller needs to hold the pte lock, and the page must be locked in
  * the anon_vma case: to serialize mapping,index checking after setting,
- * and to ensure that PageAnon is not being upgraded racily to PageKsm
- * (but PageKsm is never downgraded to PageAnon).
+ * and to ensure that an anon folio is not being upgraded racily to a KSM folio
+ * (but KSM folios are never downgraded).
  */
-void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
+void folio_add_anon_rmap_range(struct folio *folio, struct page *page,
+		unsigned int nr_pages, struct vm_area_struct *vma,
 		unsigned long address, rmap_t flags)
 {
-	struct folio *folio = page_folio(page);
-	unsigned int nr, nr_pmdmapped = 0;
+	unsigned int i, nr, nr_pmdmapped = 0;
 	bool compound = flags & RMAP_COMPOUND;
 
-	nr = __folio_add_rmap_range(folio, page, 1, vma, compound,
+	nr = __folio_add_rmap_range(folio, page, nr_pages, vma, compound,
 			&nr_pmdmapped);
 	if (nr_pmdmapped)
 		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr_pmdmapped);
@@ -1279,12 +1283,20 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 	} else if (likely(!folio_test_ksm(folio))) {
 		__page_check_anon_rmap(folio, page, vma, address);
 	}
-	if (flags & RMAP_EXCLUSIVE)
-		SetPageAnonExclusive(page);
-	/* While PTE-mapping a THP we have a PMD and a PTE mapping. */
-	VM_WARN_ON_FOLIO((atomic_read(&page->_mapcount) > 0 ||
-			  (folio_test_large(folio) && folio_entire_mapcount(folio) > 1)) &&
-			 PageAnonExclusive(page), folio);
+
+	if (flags & RMAP_EXCLUSIVE) {
+		for (i = 0; i < nr_pages; i++)
+			SetPageAnonExclusive(page + i);
+	}
+	for (i = 0; i < nr_pages; i++) {
+		struct page *cur_page = page + i;
+
+		/* While PTE-mapping a THP we have a PMD and a PTE mapping. */
+		VM_WARN_ON_FOLIO((atomic_read(&cur_page->_mapcount) > 0 ||
+				  (folio_test_large(folio) &&
+				   folio_entire_mapcount(folio) > 1)) &&
+				 PageAnonExclusive(cur_page), folio);
+	}
 
 	/*
 	 * For large folio, only mlock it if it's fully mapped to VMA. It's
@@ -1296,6 +1308,29 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
 		mlock_vma_folio(folio, vma);
 }
 
+/**
+ * page_add_anon_rmap - add mappings to an anonymous page
+ * @page:	The page to add the mapping to
+ * @vma:	The vm area in which the mapping is added
+ * @address:	The user virtual address of the page to map
+ * @flags:	The rmap flags
+ *
+ * See folio_add_anon_rmap_range().
+ */
+void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
+		unsigned long address, rmap_t flags)
+{
+	struct folio *folio = page_folio(page);
+	unsigned int nr_pages;
+
+	if (likely(!(flags & RMAP_COMPOUND)))
+		nr_pages = 1;
+	else
+		nr_pages = folio_nr_pages(folio);
+
+	folio_add_anon_rmap_range(folio, page, nr_pages, vma, address, flags);
+}
+
 /**
  * folio_add_new_anon_rmap - Add mapping to a new anonymous folio.
  * @folio: The folio to add the mapping to.