From patchwork Mon Dec 23 09:40:47 2024
From: Qi Zheng
To: peterz@infradead.org, agordeev@linux.ibm.com, kevin.brodsky@arm.com,
    tglx@linutronix.de, david@redhat.com, jannh@google.com, hughd@google.com,
    yuzhao@google.com, willy@infradead.org, muchun.song@linux.dev,
    vbabka@kernel.org, lorenzo.stoakes@oracle.com, akpm@linux-foundation.org,
    rientjes@google.com, vishal.moola@gmail.com, arnd@arndb.de,
    will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com,
    dave.hansen@linux.intel.com, rppt@kernel.org, ryan.roberts@arm.com
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-kernel@vger.kernel.org, x86@kernel.org, linux-arch@vger.kernel.org,
    linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
    loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
    linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-um@lists.infradead.org, Qi Zheng
Subject: [PATCH v3 01/17] Revert "mm: pgtable: make ptlock be freed by RCU"
Date: Mon, 23 Dec 2024 17:40:47 +0800

This reverts commit 2f3443770437e49abc39af26962d293851cbab6d.

Signed-off-by: Qi Zheng
---
 include/linux/mm.h       |  2 +-
 include/linux/mm_types.h |  9 +--------
 mm/memory.c              | 22 ++++++----------------
 3 files changed, 8 insertions(+), 25 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d61b9c7a3a7b0..c49bc7b764535 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2925,7 +2925,7 @@ void ptlock_free(struct ptdesc *ptdesc);
 
 static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return &(ptdesc->ptl->ptl);
+	return ptdesc->ptl;
 }
 #else /* ALLOC_SPLIT_PTLOCKS */
 static inline void ptlock_cache_init(void)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 90ab8293d714a..6b27db7f94963 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -434,13 +434,6 @@ FOLIO_MATCH(flags, _flags_2a);
 FOLIO_MATCH(compound_head, _head_2a);
 #undef FOLIO_MATCH
 
-#if ALLOC_SPLIT_PTLOCKS
-struct pt_lock {
-	spinlock_t ptl;
-	struct rcu_head rcu;
-};
-#endif
-
 /**
  * struct ptdesc - Memory descriptor for page tables.
  * @__page_flags: Same as page flags. Powerpc only.
@@ -489,7 +482,7 @@ struct ptdesc {
 	union {
 		unsigned long _pt_pad_2;
 #if ALLOC_SPLIT_PTLOCKS
-		struct pt_lock *ptl;
+		spinlock_t *ptl;
 #else
 		spinlock_t ptl;
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index b9b05c3f93f11..9423967b24180 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7034,34 +7034,24 @@ static struct kmem_cache *page_ptl_cachep;
 
 void __init ptlock_cache_init(void)
 {
-	page_ptl_cachep = kmem_cache_create("page->ptl", sizeof(struct pt_lock), 0,
+	page_ptl_cachep = kmem_cache_create("page->ptl", sizeof(spinlock_t), 0,
 			SLAB_PANIC, NULL);
 }
 
 bool ptlock_alloc(struct ptdesc *ptdesc)
 {
-	struct pt_lock *pt_lock;
+	spinlock_t *ptl;
 
-	pt_lock = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL);
-	if (!pt_lock)
+	ptl = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL);
+	if (!ptl)
 		return false;
-	ptdesc->ptl = pt_lock;
+	ptdesc->ptl = ptl;
 	return true;
 }
 
-static void ptlock_free_rcu(struct rcu_head *head)
-{
-	struct pt_lock *pt_lock;
-
-	pt_lock = container_of(head, struct pt_lock, rcu);
-	kmem_cache_free(page_ptl_cachep, pt_lock);
-}
-
 void ptlock_free(struct ptdesc *ptdesc)
 {
-	struct pt_lock *pt_lock = ptdesc->ptl;
-
-	call_rcu(&pt_lock->rcu, ptlock_free_rcu);
+	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif

From patchwork Mon Dec 23 09:40:48 2024
From: Qi Zheng
Subject: [PATCH v3 02/17] riscv: mm: Skip pgtable level check in {pud,p4d}_alloc_one
Date: Mon, 23 Dec 2024 17:40:48 +0800
Message-Id: <84ddf857508b98a195a790bc6ff6ab8849b44633.1734945104.git.zhengqi.arch@bytedance.com>

From: Kevin Brodsky

{pmd,pud,p4d}_alloc_one() is never called if the corresponding page table
level is folded, as {pmd,pud,p4d}_alloc() already does the required check.
We can therefore remove the runtime page table level checks in
{pud,p4d}_alloc_one. The PUD helper becomes equivalent to the generic
version, so we remove it altogether.

This is consistent with the way arm64 and x86 handle this situation
(runtime check in p4d_free() only).

Signed-off-by: Kevin Brodsky
Acked-by: Dave Hansen
Signed-off-by: Qi Zheng
Acked-by: Palmer Dabbelt
---
 arch/riscv/include/asm/pgalloc.h | 22 ++++------------------
 1 file changed, 4 insertions(+), 18 deletions(-)

diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index f52264304f772..8ad0bbe838a24 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -12,7 +12,6 @@
 #include 
 
 #ifdef CONFIG_MMU
-#define __HAVE_ARCH_PUD_ALLOC_ONE
 #define __HAVE_ARCH_PUD_FREE
 #include 
 
@@ -88,15 +87,6 @@ static inline void pgd_populate_safe(struct mm_struct *mm, pgd_t *pgd,
 	}
 }
 
-#define pud_alloc_one pud_alloc_one
-static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
-	if (pgtable_l4_enabled)
-		return __pud_alloc_one(mm, addr);
-
-	return NULL;
-}
-
 #define pud_free pud_free
 static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 {
@@ -118,15 +108,11 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
 #define p4d_alloc_one p4d_alloc_one
 static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
-	if (pgtable_l5_enabled) {
-		gfp_t gfp = GFP_PGTABLE_USER;
-
-		if (mm == &init_mm)
-			gfp = GFP_PGTABLE_KERNEL;
-		return (p4d_t *)get_zeroed_page(gfp);
-	}
+	gfp_t gfp = GFP_PGTABLE_USER;
 
-	return NULL;
+	if (mm == &init_mm)
+		gfp = GFP_PGTABLE_KERNEL;
+	return (p4d_t *)get_zeroed_page(gfp);
 }
 
 static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d)
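
For context, and not part of the patch itself: the only route to p4d_alloc_one()
is the generic p4d_alloc() wrapper in include/linux/mm.h, which, roughly (quoted
from memory rather than from this series), looks like the sketch below. When the
P4D level is folded (at compile time, or at run time as on riscv), pgd_none()
never reports an empty entry, so the allocation helper is never reached and the
removed check was effectively dead code.

/*
 * Sketch of the generic wrapper (approximate, for illustration only):
 * p4d_alloc_one() is reached only through __p4d_alloc(), and only when
 * the pgd entry is actually empty.
 */
static inline p4d_t *p4d_alloc(struct mm_struct *mm, pgd_t *pgd,
			       unsigned long address)
{
	return (unlikely(pgd_none(*pgd)) && __p4d_alloc(mm, pgd, address)) ?
		NULL : p4d_offset(pgd, address);
}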

From patchwork Mon Dec 23 09:40:49 2024
From: Qi Zheng
Subject: [PATCH v3 03/17] asm-generic: pgalloc: Provide generic p4d_{alloc_one,free}
Date: Mon, 23 Dec 2024 17:40:49 +0800
Message-Id: <4c4bcc1aa565c6252183553aecd5e5cbd1a0f6ea.1734945104.git.zhengqi.arch@bytedance.com>

From: Kevin Brodsky

Four architectures currently implement 5-level pgtables: arm64, riscv, x86
and s390. The first three have essentially the same implementation for
p4d_alloc_one() and p4d_free(), so we've got an opportunity to reduce
duplication like at the lower levels.

Provide a generic version of p4d_alloc_one() and p4d_free(), and make use
of it on those architectures. Their implementation is the same as at PUD
level, except that p4d_free() performs a runtime check by calling
mm_p4d_folded(). 5-level pgtables depend on a runtime-detected hardware
feature on all supported architectures, so we might as well include this
check in the generic implementation. No runtime check is required in
p4d_alloc_one() as the top-level p4d_alloc() already does the required
check.

Signed-off-by: Kevin Brodsky
Acked-by: Dave Hansen
Signed-off-by: Qi Zheng
---
 arch/arm64/include/asm/pgalloc.h | 17 ------------
 arch/riscv/include/asm/pgalloc.h | 23 ----------------
 arch/x86/include/asm/pgalloc.h   | 18 -------------
 include/asm-generic/pgalloc.h    | 45 ++++++++++++++++++++++++++++++++
 4 files changed, 45 insertions(+), 58 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index e75422864d1bd..2965f5a7e39e3 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -85,23 +85,6 @@ static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp)
 	__pgd_populate(pgdp, __pa(p4dp), pgdval);
 }
 
-static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
-	gfp_t gfp = GFP_PGTABLE_USER;
-
-	if (mm == &init_mm)
-		gfp = GFP_PGTABLE_KERNEL;
-	return (p4d_t *)get_zeroed_page(gfp);
-}
-
-static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
-{
-	if (!pgtable_l5_enabled())
-		return;
-	BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
-	free_page((unsigned long)p4d);
-}
-
 #define __p4d_free_tlb(tlb, p4d, addr)  p4d_free((tlb)->mm, p4d)
 #else
 static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t p4dp, pgdval_t prot)
diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index 8ad0bbe838a24..551d614d3369c 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -105,29 +105,6 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
 	}
 }
 
-#define p4d_alloc_one p4d_alloc_one
-static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
-	gfp_t gfp = GFP_PGTABLE_USER;
-
-	if (mm == &init_mm)
-		gfp = GFP_PGTABLE_KERNEL;
-	return (p4d_t *)get_zeroed_page(gfp);
-}
-
-static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d)
-{
-	BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
-	free_page((unsigned long)p4d);
-}
-
-#define p4d_free p4d_free
-static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
-{
-	if (pgtable_l5_enabled)
-		__p4d_free(mm, p4d);
-}
-
 static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
 				  unsigned long addr)
 {
diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index dcd836b59bebd..dd4841231bb9f 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -147,24 +147,6 @@ static inline void pgd_populate_safe(struct mm_struct *mm, pgd_t *pgd, p4d_t *p4
 	set_pgd_safe(pgd, __pgd(_PAGE_TABLE | __pa(p4d)));
 }
 
-static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
-	gfp_t gfp = GFP_KERNEL_ACCOUNT;
-
-	if (mm == &init_mm)
-		gfp &= ~__GFP_ACCOUNT;
-	return (p4d_t *)get_zeroed_page(gfp);
-}
-
-static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
-{
-	if (!pgtable_l5_enabled())
-		return;
-
-	BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
-	free_page((unsigned long)p4d);
-}
-
 extern void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d);
 
 static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 7c48f5fbf8aa7..59131629ac9cc 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -215,6 +215,51 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 
 #endif /* CONFIG_PGTABLE_LEVELS > 3 */
 
+#if CONFIG_PGTABLE_LEVELS > 4
+
+static inline p4d_t *__p4d_alloc_one_noprof(struct mm_struct *mm, unsigned long addr)
+{
+	gfp_t gfp = GFP_PGTABLE_USER;
+	struct ptdesc *ptdesc;
+
+	if (mm == &init_mm)
+		gfp = GFP_PGTABLE_KERNEL;
+	gfp &= ~__GFP_HIGHMEM;
+
+	ptdesc = pagetable_alloc_noprof(gfp, 0);
+	if (!ptdesc)
+		return NULL;
+
+	return ptdesc_address(ptdesc);
+}
+#define __p4d_alloc_one(...)	alloc_hooks(__p4d_alloc_one_noprof(__VA_ARGS__))
+
+#ifndef __HAVE_ARCH_P4D_ALLOC_ONE
+static inline p4d_t *p4d_alloc_one_noprof(struct mm_struct *mm, unsigned long addr)
+{
+	return __p4d_alloc_one_noprof(mm, addr);
+}
+#define p4d_alloc_one(...)	alloc_hooks(p4d_alloc_one_noprof(__VA_ARGS__))
+#endif
+
+static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d)
+{
+	struct ptdesc *ptdesc = virt_to_ptdesc(p4d);
+
+	BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
+	pagetable_free(ptdesc);
+}
+
+#ifndef __HAVE_ARCH_P4D_FREE
+static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
+{
+	if (!mm_p4d_folded(mm))
+		__p4d_free(mm, p4d);
+}
+#endif
+
+#endif /* CONFIG_PGTABLE_LEVELS > 4 */
+
 #ifndef __HAVE_ARCH_PGD_FREE
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
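
A usage note on the guards above (illustrative only, not taken from this
series): an architecture that cannot use the generic helpers keeps its own by
defining the corresponding __HAVE_ARCH_* macros before including
asm-generic/pgalloc.h. A minimal sketch, with placeholder bodies:

/* Hypothetical arch/<arch>/include/asm/pgalloc.h fragment. */
#define __HAVE_ARCH_P4D_ALLOC_ONE
#define __HAVE_ARCH_P4D_FREE
#include <asm-generic/pgalloc.h>

static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
{
	/* arch-specific allocation would go here */
	return (p4d_t *)get_zeroed_page(GFP_PGTABLE_USER);
}

static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
{
	/* keep the runtime-folding check, mirroring the generic version */
	if (!mm_p4d_folded(mm))
		free_page((unsigned long)p4d);
}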

From patchwork Mon Dec 23 09:40:50 2024
From: Qi Zheng
Subject: [PATCH v3 04/17] mm: pgtable: add statistics for P4D level page table
Date: Mon, 23 Dec 2024 17:40:50 +0800
Message-Id: <2fa644e37ab917292f5c342e40fa805aa91afbbd.1734945104.git.zhengqi.arch@bytedance.com>

Like other levels of page tables, add statistics for P4D level page table.

Signed-off-by: Qi Zheng
Originally-by: Peter Zijlstra (Intel)
---
 arch/riscv/include/asm/pgalloc.h |  6 +++++-
 arch/x86/mm/pgtable.c            |  3 +++
 include/asm-generic/pgalloc.h    |  2 ++
 include/linux/mm.h               | 16 ++++++++++++++++
 4 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h
index 551d614d3369c..3466fbe2e508d 100644
--- a/arch/riscv/include/asm/pgalloc.h
+++ b/arch/riscv/include/asm/pgalloc.h
@@ -108,8 +108,12 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
 static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
 				  unsigned long addr)
 {
-	if (pgtable_l5_enabled)
+	if (pgtable_l5_enabled) {
+		struct ptdesc *ptdesc = virt_to_ptdesc(p4d);
+
+		pagetable_p4d_dtor(ptdesc);
 		riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d));
+	}
 }
 
 #endif /* __PAGETABLE_PMD_FOLDED */
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 69a357b15974a..3d6e84da45b24 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -94,6 +94,9 @@ void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud)
 #if CONFIG_PGTABLE_LEVELS > 4
 void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d)
 {
+	struct ptdesc *ptdesc = virt_to_ptdesc(p4d);
+
+	pagetable_p4d_dtor(ptdesc);
 	paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT);
 	paravirt_tlb_remove_table(tlb, virt_to_page(p4d));
 }
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 59131629ac9cc..bb482eeca0c3e 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -230,6 +230,7 @@ static inline p4d_t *__p4d_alloc_one_noprof(struct mm_struct *mm, unsigned long
 	if (!ptdesc)
 		return NULL;
 
+	pagetable_p4d_ctor(ptdesc);
 	return ptdesc_address(ptdesc);
 }
 #define __p4d_alloc_one(...)	alloc_hooks(__p4d_alloc_one_noprof(__VA_ARGS__))
@@ -247,6 +248,7 @@ static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d)
 	struct ptdesc *ptdesc = virt_to_ptdesc(p4d);
 
 	BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
+	pagetable_p4d_dtor(ptdesc);
 	pagetable_free(ptdesc);
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c49bc7b764535..5d82f42ddd5cc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3175,6 +3175,22 @@ static inline void pagetable_pud_dtor(struct ptdesc *ptdesc)
 	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
+static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc)
+{
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	__folio_set_pgtable(folio);
+	lruvec_stat_add_folio(folio, NR_PAGETABLE);
+}
+
+static inline void pagetable_p4d_dtor(struct ptdesc *ptdesc)
+{
+	struct folio *folio = ptdesc_folio(ptdesc);
+
+	__folio_clear_pgtable(folio);
+	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
+}
+
 extern void __init pagecache_init(void);
 extern void free_initmem(void);

From patchwork Mon Dec 23 09:40:51 2024
From: Qi Zheng
Subject: [PATCH v3 05/17] arm64: pgtable: use mmu gather to free p4d level page table
Date: Mon, 23 Dec 2024 17:40:51 +0800
Message-Id: <7c12112047ac230809aacd0379259414b9b0d3a3.1734945104.git.zhengqi.arch@bytedance.com>

Like other levels of page tables, also use mmu gather mechanism to free
p4d level page table.

Signed-off-by: Qi Zheng
Originally-by: Peter Zijlstra (Intel)
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/pgalloc.h |  1 -
 arch/arm64/include/asm/tlb.h     | 14 ++++++++++++++
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 2965f5a7e39e3..1b4509d3382c6 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -85,7 +85,6 @@ static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp)
 	__pgd_populate(pgdp, __pa(p4dp), pgdval);
 }
 
-#define __p4d_free_tlb(tlb, p4d, addr)  p4d_free((tlb)->mm, p4d)
 #else
 static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t p4dp, pgdval_t prot)
 {
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index a947c6e784ed2..445282cde9afb 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -111,4 +111,18 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp,
 }
 #endif
 
+#if CONFIG_PGTABLE_LEVELS > 4
+static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4dp,
+				  unsigned long addr)
+{
+	struct ptdesc *ptdesc = virt_to_ptdesc(p4dp);
+
+	if (!pgtable_l5_enabled())
+		return;
+
+	pagetable_p4d_dtor(ptdesc);
+	tlb_remove_ptdesc(tlb, ptdesc);
+}
+#endif
+
 #endif

From patchwork Mon Dec 23 09:40:52 2024
From: Qi Zheng
Subject: [PATCH v3 06/17] s390: pgtable: add statistics for PUD and P4D level page table
Date: Mon, 23 Dec 2024 17:40:52 +0800
Message-Id: <35be22a2b1666df729a9fc108c2da5cce266e4be.1734945104.git.zhengqi.arch@bytedance.com>

Like PMD and PTE level page table, also add statistics for PUD and P4D
page table.

Signed-off-by: Qi Zheng
Suggested-by: Peter Zijlstra (Intel)
Cc: linux-s390@vger.kernel.org
---
 arch/s390/include/asm/pgalloc.h | 29 +++++++++++++++++++-------
 arch/s390/include/asm/tlb.h     | 37 +++++++++++++++++-----------------
 2 files changed, 40 insertions(+), 26 deletions(-)

diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 7b84ef6dc4b6d..a0c1ca5d8423c 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -53,29 +53,42 @@ static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	unsigned long *table = crst_table_alloc(mm);
 
-	if (table)
-		crst_table_init(table, _REGION2_ENTRY_EMPTY);
+	if (!table)
+		return NULL;
+	crst_table_init(table, _REGION2_ENTRY_EMPTY);
+	pagetable_p4d_ctor(virt_to_ptdesc(table));
+
 	return (p4d_t *) table;
 }
 
 static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d)
 {
-	if (!mm_p4d_folded(mm))
-		crst_table_free(mm, (unsigned long *) p4d);
+	if (mm_p4d_folded(mm))
+		return;
+
+	pagetable_p4d_dtor(virt_to_ptdesc(p4d));
+	crst_table_free(mm, (unsigned long *) p4d);
 }
 
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address)
 {
 	unsigned long *table = crst_table_alloc(mm);
-	if (table)
-		crst_table_init(table, _REGION3_ENTRY_EMPTY);
+
+	if (!table)
+		return NULL;
+	crst_table_init(table, _REGION3_ENTRY_EMPTY);
+	pagetable_pud_ctor(virt_to_ptdesc(table));
+
 	return (pud_t *) table;
 }
 
 static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 {
-	if (!mm_pud_folded(mm))
-		crst_table_free(mm, (unsigned long *) pud);
+	if (mm_pud_folded(mm))
+		return;
+
+	pagetable_pud_dtor(virt_to_ptdesc(pud));
+	crst_table_free(mm, (unsigned long *) pud);
 }
 
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr)
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index e95b2c8081eb8..b946964afce8e 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -110,24 +110,6 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
 	tlb_remove_ptdesc(tlb, pmd);
 }
 
-/*
- * p4d_free_tlb frees a pud table and clears the CRSTE for the
- * region second table entry from the tlb.
- * If the mm uses a four level page table the single p4d is freed
- * as the pgd. p4d_free_tlb checks the asce_limit against 8PB
- * to avoid the double free of the p4d in this case.
- */
-static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
-				unsigned long address)
-{
-	if (mm_p4d_folded(tlb->mm))
-		return;
-	__tlb_adjust_range(tlb, address, PAGE_SIZE);
-	tlb->mm->context.flush_mm = 1;
-	tlb->freed_tables = 1;
-	tlb_remove_ptdesc(tlb, p4d);
-}
-
 /*
  * pud_free_tlb frees a pud table and clears the CRSTE for the
  * region third table entry from the tlb.
@@ -140,11 +122,30 @@ static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
 {
 	if (mm_pud_folded(tlb->mm))
 		return;
+	pagetable_pud_dtor(virt_to_ptdesc(pud));
 	tlb->mm->context.flush_mm = 1;
 	tlb->freed_tables = 1;
 	tlb->cleared_p4ds = 1;
 	tlb_remove_ptdesc(tlb, pud);
 }
 
+/*
+ * p4d_free_tlb frees a p4d table and clears the CRSTE for the
+ * region second table entry from the tlb.
+ * If the mm uses a four level page table the single p4d is freed
+ * as the pgd. p4d_free_tlb checks the asce_limit against 8PB
+ * to avoid the double free of the p4d in this case.
+ */
+static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
+				unsigned long address)
+{
+	if (mm_p4d_folded(tlb->mm))
+		return;
+	pagetable_p4d_dtor(virt_to_ptdesc(p4d));
+	__tlb_adjust_range(tlb, address, PAGE_SIZE);
+	tlb->mm->context.flush_mm = 1;
+	tlb->freed_tables = 1;
+	tlb_remove_ptdesc(tlb, p4d);
+}
+
 #endif /* _S390_TLB_H */

From patchwork Mon Dec 23 09:40:53 2024
From: Qi Zheng
Signed-off-by: Qi Zheng Originally-by: Peter Zijlstra (Intel) --- Documentation/mm/split_page_table_lock.rst | 4 +- arch/arm/include/asm/tlb.h | 4 +- arch/arm64/include/asm/tlb.h | 8 ++-- arch/csky/include/asm/pgalloc.h | 2 +- arch/hexagon/include/asm/pgalloc.h | 2 +- arch/loongarch/include/asm/pgalloc.h | 2 +- arch/m68k/include/asm/mcf_pgalloc.h | 4 +- arch/m68k/include/asm/sun3_pgalloc.h | 2 +- arch/m68k/mm/motorola.c | 2 +- arch/mips/include/asm/pgalloc.h | 2 +- arch/nios2/include/asm/pgalloc.h | 2 +- arch/openrisc/include/asm/pgalloc.h | 2 +- arch/powerpc/mm/book3s64/mmu_context.c | 2 +- arch/powerpc/mm/book3s64/pgtable.c | 2 +- arch/powerpc/mm/pgtable-frag.c | 4 +- arch/riscv/include/asm/pgalloc.h | 8 ++-- arch/riscv/mm/init.c | 4 +- arch/s390/include/asm/pgalloc.h | 6 +-- arch/s390/include/asm/tlb.h | 6 +-- arch/s390/mm/pgalloc.c | 2 +- arch/sh/include/asm/pgalloc.h | 2 +- arch/sparc/mm/init_64.c | 2 +- arch/sparc/mm/srmmu.c | 2 +- arch/um/include/asm/pgalloc.h | 6 +-- arch/x86/mm/pgtable.c | 12 ++--- include/asm-generic/pgalloc.h | 8 ++-- include/linux/mm.h | 52 ++++------------------ mm/memory.c | 3 +- 28 files changed, 62 insertions(+), 95 deletions(-) diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst index 581446d4a4eba..8e1ceb0a6619a 100644 --- a/Documentation/mm/split_page_table_lock.rst +++ b/Documentation/mm/split_page_table_lock.rst @@ -62,7 +62,7 @@ Support of split page table lock by an architecture =================================================== There's no need in special enabling of PTE split page table lock: everything -required is done by pagetable_pte_ctor() and pagetable_pte_dtor(), which +required is done by pagetable_pte_ctor() and pagetable_dtor(), which must be called on PTE table allocation / freeing. Make sure the architecture doesn't use slab allocator for page table @@ -73,7 +73,7 @@ PMD split lock only makes sense if you have more than two page table levels. PMD split lock enabling requires pagetable_pmd_ctor() call on PMD table -allocation and pagetable_pmd_dtor() on freeing. +allocation and pagetable_dtor() on freeing. 
Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index f40d06ad5d2a3..ef79bf1e8563f 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -41,7 +41,7 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); #ifndef CONFIG_ARM_LPAE /* @@ -61,7 +61,7 @@ __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) #ifdef CONFIG_ARM_LPAE struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); #endif } diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index 445282cde9afb..408d0f36a8a8f 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -82,7 +82,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, { struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } @@ -92,7 +92,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, { struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -106,7 +106,7 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, if (!pgtable_l4_enabled()) return; - pagetable_pud_dtor(ptdesc); + pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -120,7 +120,7 @@ static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4dp, if (!pgtable_l5_enabled()) return; - pagetable_p4d_dtor(ptdesc); + pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h index 9c84c9012e534..f1ce5b7b28f22 100644 --- a/arch/csky/include/asm/pgalloc.h +++ b/arch/csky/include/asm/pgalloc.h @@ -63,7 +63,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) #define __pte_free_tlb(tlb, pte, address) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc(tlb, page_ptdesc(pte)); \ } while (0) diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h index 55988625e6fbc..40e42a0e71673 100644 --- a/arch/hexagon/include/asm/pgalloc.h +++ b/arch/hexagon/include/asm/pgalloc.h @@ -89,7 +89,7 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor((page_ptdesc(pte))); \ + pagetable_dtor((page_ptdesc(pte))); \ tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h index a7b9c9e73593d..7211dff8c969e 100644 --- a/arch/loongarch/include/asm/pgalloc.h +++ b/arch/loongarch/include/asm/pgalloc.h @@ -57,7 +57,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) #define __pte_free_tlb(tlb, pte, address) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h index 302c5bf67179e..22d6c1fcabfb4 100644 --- a/arch/m68k/include/asm/mcf_pgalloc.h +++ b/arch/m68k/include/asm/mcf_pgalloc.h 
@@ -37,7 +37,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable, { struct ptdesc *ptdesc = virt_to_ptdesc(pgtable); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } @@ -61,7 +61,7 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable) { struct ptdesc *ptdesc = virt_to_ptdesc(pgtable); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h index 4a137eecb6fe4..2b626cb3ad0ae 100644 --- a/arch/m68k/include/asm/sun3_pgalloc.h +++ b/arch/m68k/include/asm/sun3_pgalloc.h @@ -19,7 +19,7 @@ extern const char bad_pmd_string[]; #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c index c1761d309fc61..81715cece70c6 100644 --- a/arch/m68k/mm/motorola.c +++ b/arch/m68k/mm/motorola.c @@ -201,7 +201,7 @@ int free_pointer_table(void *table, int type) list_del(dp); mmu_page_dtor((void *)page); if (type == TABLE_PTE) - pagetable_pte_dtor(virt_to_ptdesc((void *)page)); + pagetable_dtor(virt_to_ptdesc((void *)page)); free_page (page); return 1; } else if (ptable_list[type].next != dp) { diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h index f4440edcd8fe2..36d9805033c4b 100644 --- a/arch/mips/include/asm/pgalloc.h +++ b/arch/mips/include/asm/pgalloc.h @@ -56,7 +56,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) #define __pte_free_tlb(tlb, pte, address) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h index ce6bb8e74271f..12a536b7bfbd4 100644 --- a/arch/nios2/include/asm/pgalloc.h +++ b/arch/nios2/include/asm/pgalloc.h @@ -30,7 +30,7 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h index c6a73772a5466..596e2355824e3 100644 --- a/arch/openrisc/include/asm/pgalloc.h +++ b/arch/openrisc/include/asm/pgalloc.h @@ -68,7 +68,7 @@ extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c index 1715b07c630c9..4e1e45420bd49 100644 --- a/arch/powerpc/mm/book3s64/mmu_context.c +++ b/arch/powerpc/mm/book3s64/mmu_context.c @@ -253,7 +253,7 @@ static void pmd_frag_destroy(void *pmd_frag) count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT; /* We allow PTE_FRAG_NR fragments from a PTE page */ if (atomic_sub_and_test(PMD_FRAG_NR - count, &ptdesc->pt_frag_refcount)) { - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } } diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c index 3745425280808..3f28e4acd920b 100644 --- a/arch/powerpc/mm/book3s64/pgtable.c +++ 
b/arch/powerpc/mm/book3s64/pgtable.c @@ -477,7 +477,7 @@ void pmd_fragment_free(unsigned long *pmd) BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0); if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) { - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } } diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c index e89f64a0f24ae..713268ccb1a0e 100644 --- a/arch/powerpc/mm/pgtable-frag.c +++ b/arch/powerpc/mm/pgtable-frag.c @@ -25,7 +25,7 @@ void pte_frag_destroy(void *pte_frag) count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT; /* We allow PTE_FRAG_NR fragments from a PTE page */ if (atomic_sub_and_test(PTE_FRAG_NR - count, &ptdesc->pt_frag_refcount)) { - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } } @@ -111,7 +111,7 @@ static void pte_free_now(struct rcu_head *head) struct ptdesc *ptdesc; ptdesc = container_of(head, struct ptdesc, pt_rcu_head); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h index 3466fbe2e508d..b6793c5c99296 100644 --- a/arch/riscv/include/asm/pgalloc.h +++ b/arch/riscv/include/asm/pgalloc.h @@ -100,7 +100,7 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, if (pgtable_l4_enabled) { struct ptdesc *ptdesc = virt_to_ptdesc(pud); - pagetable_pud_dtor(ptdesc); + pagetable_dtor(ptdesc); riscv_tlb_remove_ptdesc(tlb, ptdesc); } } @@ -111,7 +111,7 @@ static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, if (pgtable_l5_enabled) { struct ptdesc *ptdesc = virt_to_ptdesc(p4d); - pagetable_p4d_dtor(ptdesc); + pagetable_dtor(ptdesc); riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); } } @@ -144,7 +144,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, { struct ptdesc *ptdesc = virt_to_ptdesc(pmd); - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); riscv_tlb_remove_ptdesc(tlb, ptdesc); } @@ -155,7 +155,7 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, { struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); riscv_tlb_remove_ptdesc(tlb, ptdesc); } #endif /* CONFIG_MMU */ diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index fc53ce748c804..8d703fb51b1dc 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -1558,7 +1558,7 @@ static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd) return; } - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); if (PageReserved(page)) free_reserved_page(page); else @@ -1580,7 +1580,7 @@ static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud, bool is_vmemm } if (!is_vmemmap) - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); if (PageReserved(page)) free_reserved_page(page); else diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h index a0c1ca5d8423c..5fced6d3c36b0 100644 --- a/arch/s390/include/asm/pgalloc.h +++ b/arch/s390/include/asm/pgalloc.h @@ -66,7 +66,7 @@ static inline void p4d_free(struct mm_struct *mm, p4d_t *p4d) if (mm_p4d_folded(mm)) return; - pagetable_p4d_dtor(virt_to_ptdesc(p4d)); + pagetable_dtor(virt_to_ptdesc(p4d)); crst_table_free(mm, (unsigned long *) p4d); } @@ -87,7 +87,7 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud) if (mm_pud_folded(mm)) return; - pagetable_pud_dtor(virt_to_ptdesc(pud)); + pagetable_dtor(virt_to_ptdesc(pud)); crst_table_free(mm, (unsigned long *) pud); } @@ 
-109,7 +109,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) { if (mm_pmd_folded(mm)) return; - pagetable_pmd_dtor(virt_to_ptdesc(pmd)); + pagetable_dtor(virt_to_ptdesc(pmd)); crst_table_free(mm, (unsigned long *) pmd); } diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h index b946964afce8e..74b6fba4c2ee3 100644 --- a/arch/s390/include/asm/tlb.h +++ b/arch/s390/include/asm/tlb.h @@ -102,7 +102,7 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, { if (mm_pmd_folded(tlb->mm)) return; - pagetable_pmd_dtor(virt_to_ptdesc(pmd)); + pagetable_dtor(virt_to_ptdesc(pmd)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; @@ -122,7 +122,7 @@ static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, { if (mm_pud_folded(tlb->mm)) return; - pagetable_pud_dtor(virt_to_ptdesc(pud)); + pagetable_dtor(virt_to_ptdesc(pud)); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_p4ds = 1; @@ -141,7 +141,7 @@ static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, { if (mm_p4d_folded(tlb->mm)) return; - pagetable_p4d_dtor(virt_to_ptdesc(p4d)); + pagetable_dtor(virt_to_ptdesc(p4d)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index 58696a0c4e4ac..569de24d33761 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -182,7 +182,7 @@ unsigned long *page_table_alloc(struct mm_struct *mm) static void pagetable_pte_dtor_free(struct ptdesc *ptdesc) { - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h index 5d8577ab15911..96d938fdf2244 100644 --- a/arch/sh/include/asm/pgalloc.h +++ b/arch/sh/include/asm/pgalloc.h @@ -34,7 +34,7 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, #define __pte_free_tlb(tlb, pte, addr) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 21f8cbbd0581c..05882bca5b732 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -2915,7 +2915,7 @@ static void __pte_free(pgtable_t pte) { struct ptdesc *ptdesc = virt_to_ptdesc(pte); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c index 9df51a62333d6..e3a72c884b867 100644 --- a/arch/sparc/mm/srmmu.c +++ b/arch/sparc/mm/srmmu.c @@ -372,7 +372,7 @@ void pte_free(struct mm_struct *mm, pgtable_t ptep) page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT); spin_lock(&mm->page_table_lock); if (page_ref_dec_return(page) == 1) - pagetable_pte_dtor(page_ptdesc(page)); + pagetable_dtor(page_ptdesc(page)); spin_unlock(&mm->page_table_lock); srmmu_free_nocache(ptep, SRMMU_PTE_TABLE_SIZE); diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h index 04fb4e6969a46..f0af23c3aeb2b 100644 --- a/arch/um/include/asm/pgalloc.h +++ b/arch/um/include/asm/pgalloc.h @@ -27,7 +27,7 @@ extern pgd_t *pgd_alloc(struct mm_struct *); #define __pte_free_tlb(tlb, pte, address) \ do { \ - pagetable_pte_dtor(page_ptdesc(pte)); \ + pagetable_dtor(page_ptdesc(pte)); \ tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) @@ -35,7 +35,7 @@ do { \ #define __pmd_free_tlb(tlb, pmd, 
address) \ do { \ - pagetable_pmd_dtor(virt_to_ptdesc(pmd)); \ + pagetable_dtor(virt_to_ptdesc(pmd)); \ tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd)); \ } while (0) @@ -43,7 +43,7 @@ do { \ #define __pud_free_tlb(tlb, pud, address) \ do { \ - pagetable_pud_dtor(virt_to_ptdesc(pud)); \ + pagetable_dtor(virt_to_ptdesc(pud)); \ tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pud)); \ } while (0) diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index 3d6e84da45b24..a6cd9660e29ec 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -60,7 +60,7 @@ early_param("userpte", setup_userpte); void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) { - pagetable_pte_dtor(page_ptdesc(pte)); + pagetable_dtor(page_ptdesc(pte)); paravirt_release_pte(page_to_pfn(pte)); paravirt_tlb_remove_table(tlb, pte); } @@ -77,7 +77,7 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) #ifdef CONFIG_X86_PAE tlb->need_flush_all = 1; #endif - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc)); } @@ -86,7 +86,7 @@ void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud) { struct ptdesc *ptdesc = virt_to_ptdesc(pud); - pagetable_pud_dtor(ptdesc); + pagetable_dtor(ptdesc); paravirt_release_pud(__pa(pud) >> PAGE_SHIFT); paravirt_tlb_remove_table(tlb, virt_to_page(pud)); } @@ -96,7 +96,7 @@ void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d) { struct ptdesc *ptdesc = virt_to_ptdesc(p4d); - pagetable_p4d_dtor(ptdesc); + pagetable_dtor(ptdesc); paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT); paravirt_tlb_remove_table(tlb, virt_to_page(p4d)); } @@ -233,7 +233,7 @@ static void free_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) if (pmds[i]) { ptdesc = virt_to_ptdesc(pmds[i]); - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); mm_dec_nr_pmds(mm); } @@ -867,7 +867,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr) free_page((unsigned long)pmd_sv); - pagetable_pmd_dtor(virt_to_ptdesc(pmd)); + pagetable_dtor(virt_to_ptdesc(pmd)); free_page((unsigned long)pmd); return 1; diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h index bb482eeca0c3e..4afb346eae255 100644 --- a/include/asm-generic/pgalloc.h +++ b/include/asm-generic/pgalloc.h @@ -109,7 +109,7 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) { struct ptdesc *ptdesc = page_ptdesc(pte_page); - pagetable_pte_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } @@ -153,7 +153,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) struct ptdesc *ptdesc = virt_to_ptdesc(pmd); BUG_ON((unsigned long)pmd & (PAGE_SIZE-1)); - pagetable_pmd_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } #endif @@ -202,7 +202,7 @@ static inline void __pud_free(struct mm_struct *mm, pud_t *pud) struct ptdesc *ptdesc = virt_to_ptdesc(pud); BUG_ON((unsigned long)pud & (PAGE_SIZE-1)); - pagetable_pud_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } @@ -248,7 +248,7 @@ static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d) struct ptdesc *ptdesc = virt_to_ptdesc(p4d); BUG_ON((unsigned long)p4d & (PAGE_SIZE-1)); - pagetable_p4d_dtor(ptdesc); + pagetable_dtor(ptdesc); pagetable_free(ptdesc); } diff --git a/include/linux/mm.h b/include/linux/mm.h index 5d82f42ddd5cc..cad11fa10c192 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2992,6 +2992,15 @@ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; } static inline void 
ptlock_free(struct ptdesc *ptdesc) {} #endif /* defined(CONFIG_SPLIT_PTE_PTLOCKS) */ +static inline void pagetable_dtor(struct ptdesc *ptdesc) +{ + struct folio *folio = ptdesc_folio(ptdesc); + + ptlock_free(ptdesc); + __folio_clear_pgtable(folio); + lruvec_stat_sub_folio(folio, NR_PAGETABLE); +} + static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc) { struct folio *folio = ptdesc_folio(ptdesc); @@ -3003,15 +3012,6 @@ static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc) return true; } -static inline void pagetable_pte_dtor(struct ptdesc *ptdesc) -{ - struct folio *folio = ptdesc_folio(ptdesc); - - ptlock_free(ptdesc); - __folio_clear_pgtable(folio); - lruvec_stat_sub_folio(folio, NR_PAGETABLE); -} - pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) @@ -3088,14 +3088,6 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) return ptlock_init(ptdesc); } -static inline void pmd_ptlock_free(struct ptdesc *ptdesc) -{ -#ifdef CONFIG_TRANSPARENT_HUGEPAGE - VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc)); -#endif - ptlock_free(ptdesc); -} - #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte) #else @@ -3106,7 +3098,6 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd) } static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; } -static inline void pmd_ptlock_free(struct ptdesc *ptdesc) {} #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte) @@ -3131,15 +3122,6 @@ static inline bool pagetable_pmd_ctor(struct ptdesc *ptdesc) return true; } -static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc) -{ - struct folio *folio = ptdesc_folio(ptdesc); - - pmd_ptlock_free(ptdesc); - __folio_clear_pgtable(folio); - lruvec_stat_sub_folio(folio, NR_PAGETABLE); -} - /* * No scalability reason to split PUD locks yet, but follow the same pattern * as the PMD locks to make it easier if we decide to. 
The VM should not be @@ -3167,14 +3149,6 @@ static inline void pagetable_pud_ctor(struct ptdesc *ptdesc) lruvec_stat_add_folio(folio, NR_PAGETABLE); } -static inline void pagetable_pud_dtor(struct ptdesc *ptdesc) -{ - struct folio *folio = ptdesc_folio(ptdesc); - - __folio_clear_pgtable(folio); - lruvec_stat_sub_folio(folio, NR_PAGETABLE); -} - static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc) { struct folio *folio = ptdesc_folio(ptdesc); @@ -3183,14 +3157,6 @@ static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc) lruvec_stat_add_folio(folio, NR_PAGETABLE); } -static inline void pagetable_p4d_dtor(struct ptdesc *ptdesc) -{ - struct folio *folio = ptdesc_folio(ptdesc); - - __folio_clear_pgtable(folio); - lruvec_stat_sub_folio(folio, NR_PAGETABLE); -} - extern void __init pagecache_init(void); extern void free_initmem(void); diff --git a/mm/memory.c b/mm/memory.c index 9423967b24180..ad871e564568b 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -7051,7 +7051,8 @@ bool ptlock_alloc(struct ptdesc *ptdesc) void ptlock_free(struct ptdesc *ptdesc) { - kmem_cache_free(page_ptl_cachep, ptdesc->ptl); + if (ptdesc->ptl) + kmem_cache_free(page_ptl_cachep, ptdesc->ptl); } #endif From patchwork Mon Dec 23 09:40:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13918686 Received: from mail-pf1-f180.google.com (mail-pf1-f180.google.com [209.85.210.180]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3EDFE1A4AAA for ; Mon, 23 Dec 2024 09:44:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.180 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947088; cv=none; b=gMRRNs6eBitXjn/LbINoiCx3FkJG7Xq7EPPHS+Yt1TynzbfPnoSIMq8t0DXe0gFvuumvgWVNX9zqCLMi2SMNeY5YBmPovGGo6i/Ko8PEqOeyJltSutirxv7N39iunLh2N7oKBxJ95PaYfmrwCu9ScN9YbPpeZAvsGdYy7k7Yfq8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947088; c=relaxed/simple; bh=H/x1u07M3HXF9oVd+o3reuerQgd2pca8MMjlY49GFUc=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=nIZ5W54iIzOb9ktLFnwrmJQub7NrTJi++dAAMjiAUZKs9zfOw7AGtHOWjOKloWPOL2epqYzPTBmwFmOPEtYW9lEDVTJz5WQ1ixQraUtixztFtCZZDL/BjOQRoPczNy+YA8VGuJIttIx/LXCCVFSbe8bbMDrZJkXIHPQ5N6KskXQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com; spf=pass smtp.mailfrom=bytedance.com; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b=WQBEl3A+; arc=none smtp.client-ip=209.85.210.180 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b="WQBEl3A+" Received: by mail-pf1-f180.google.com with SMTP id d2e1a72fcca58-725abf74334so3354392b3a.3 for ; Mon, 23 Dec 2024 01:44:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance.com; s=google; t=1734947085; x=1735551885; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=qA0mhWnXo6upWemI3POPEc6VJyAkyxFJggkOB6WNq2k=; 
b=WQBEl3A+mEOA3b5ceCxp8meUWkNWxaDq19onO6zzpsw8B6C1H7Mv2ClbIVM+aXCijy 8OdVyNWPGfWsgQBZ3xX19CbOpxwNzVFXM1mZmPUCeJtUtlONy5pB0Tis1opn9oEk6a1r LyGcy0HD41FxpYv1jLY82v3cmHLPWej9P9aUI++2iOOLA3zpR5pzBf8t6XlrD3ay3qCO VxtSuWlhTz+SqOOdzMiIR046vfUShNoCt9H7Bgxdtps5fHUvcbGkXoY+LflF8sk2gvio 6UhUH25G6Fb4mB2mTaOMD+F5eDM8prZVqjQ7uojLmx6dIG/mjzSEck4qD3ZQWVKIBaOz jt5w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734947085; x=1735551885; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=qA0mhWnXo6upWemI3POPEc6VJyAkyxFJggkOB6WNq2k=; b=T/Mb5w5OFEW/feRPtQsL5LgLpFF7NWT4ryNeCW9NNvQSxQYvX7Q/Vq9rmS588DZoWI lG8hAkWY3J1AjhaO32nmZ+sHxdm2tQtPMb0lG7HaOlZ9IsOfzmEyC7hAJK+CfrJKORH2 HYEb01NpoexCfmPalEwdpSt66jGZkEN/Y8oVlyEwimtgb8q8pMQJgF1qIgOfzbbPllAX 5dJufgNTi0S85E70NnmZAcBflej7SEW1ALqkY6kRWrM79K4x+iZ85eY7c1m6zXCtmBQ6 NyhUHSML/pOeg0COMIenchtfbCkUXjZ12/+tjuAt0uJqu7yVsU8pmbepBQpIZBJfnPta xwsQ== X-Forwarded-Encrypted: i=1; AJvYcCW0DEY32q8m1Noo0UTwcNxONW/FA4WIRPvUZsAq8TS7vTS5FUlJiH29HNiHxOPe4luxlt10OZiAXQ==@vger.kernel.org X-Gm-Message-State: AOJu0Yzt986uSnjyVqYvoKPR2SLKHibiGQo94BhuVQjOQM63iGKiWx3/ ONGVvE9a6btDOj8/AwIAlzk8EGqweurm0OZ3E9Kyd6nRGMk80Eb1i2Qz1eVLgts= X-Gm-Gg: ASbGncvaRJ76JfdaT1QsoLrdmufuCtCaV278dgonm+Vx/dYa9Z7pWjuM06wmyKWY7lh HFQy4+DdjxnkANTMhbQ2CAgRoB8w1BnqsvtvmTpSnP4JfCLNDcRQWTn7sjf05VRVtwNIiKgyK/e X9/ofQh3IfNR8Evl6RrNUL/RxKSMcTtqEiuKObOaK7F1Iv87ypvv9TQd88F7D+alrvdCmAnnNkN vZlstQp2zL1w/ehJ0ebZJSR7yCC81ytZXRhohBb5ZtYpzoNm+yG99btZs+rTSXXtQZyxTHFyB9z 4pGYNkh73/07gBjiM/MZpw== X-Google-Smtp-Source: AGHT+IEwnxNRxjpY566VNLJW6lRIrSoOOApqv/9DKS3yUbVNE+/391+nOnqyFIwO+T3gR5XgQach6A== X-Received: by 2002:a05:6a00:330b:b0:728:e382:5f14 with SMTP id d2e1a72fcca58-72abdd7bae3mr14443469b3a.9.1734947085429; Mon, 23 Dec 2024 01:44:45 -0800 (PST) Received: from C02DW0BEMD6R.bytedance.net ([203.208.167.150]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-72aad8486c6sm7468309b3a.85.2024.12.23.01.44.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 23 Dec 2024 01:44:45 -0800 (PST) From: Qi Zheng To: peterz@infradead.org, agordeev@linux.ibm.com, kevin.brodsky@arm.com, tglx@linutronix.de, david@redhat.com, jannh@google.com, hughd@google.com, yuzhao@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, lorenzo.stoakes@oracle.com, akpm@linux-foundation.org, rientjes@google.com, vishal.moola@gmail.com, arnd@arndb.de, will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com, dave.hansen@linux.intel.com, rppt@kernel.org, ryan.roberts@arm.com Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-sh@vger.kernel.org, linux-um@lists.infradead.org, Qi Zheng Subject: [PATCH v3 08/17] arm: pgtable: move pagetable_dtor() to __tlb_remove_table() Date: Mon, 23 Dec 2024 17:40:54 +0800 Message-Id: <955162bfbbcd9fbb3b074e1fe2aef4f64b61d6f9.1734945104.git.zhengqi.arch@bytedance.com> X-Mailer: git-send-email 2.24.3 (Apple Git-128) In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-sh@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 
Move pagetable_dtor() to __tlb_remove_table(), so that ptlock and page table pages can be freed together (regardless of whether RCU is used). This prevents the use-after-free problem where the ptlock is freed immediately but the page table pages is freed later via RCU. Page tables shouldn't have swap cache, so use pagetable_free() instead of free_page_and_swap_cache() to free page table pages. Signed-off-by: Qi Zheng Suggested-by: Peter Zijlstra (Intel) Cc: linux-arm-kernel@lists.infradead.org --- arch/arm/include/asm/tlb.h | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index ef79bf1e8563f..264ab635e807a 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -26,12 +26,14 @@ #else /* !CONFIG_MMU */ -#include #include static inline void __tlb_remove_table(void *_table) { - free_page_and_swap_cache((struct page *)_table); + struct ptdesc *ptdesc = (struct ptdesc *)_table; + + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); } #include @@ -41,8 +43,6 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_dtor(ptdesc); - #ifndef CONFIG_ARM_LPAE /* * With the classic ARM MMU, a pte page has two corresponding pmd @@ -61,7 +61,6 @@ __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) #ifdef CONFIG_ARM_LPAE struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); #endif } From patchwork Mon Dec 23 09:40:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13918687 Received: from mail-pf1-f174.google.com (mail-pf1-f174.google.com [209.85.210.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 763F71A8F6D for ; Mon, 23 Dec 2024 09:44:58 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947100; cv=none; b=oBx1vF/xgVDmI/Xy0cBAbgbzuf0PuBolxG+SFkWC0NiXPpJ/+QGIp7wf9JAla3urMz33Swq0dOvfew3JsvgsKt9lHznvYfJYYLHTq1oFYwTcpBmKw9iPsd4ITCVqs6HgG9ZFsmMBGgRqpA9HwOFOvT0dwveQ2wUrLzhcwBDx+jg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947100; c=relaxed/simple; bh=w9Tb6kkg2Q49UGEfUZpSUyo28N/wow0+21NvTYUEzY4=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=P4n1J2Jg9HPAH7v1thKXXRBuEMKsjfQdpGxgi/azZOjVVX0fbfR2KyMpLf8npA/e9Pg3tiBn7Iv/EKgTAfU2FP9HpdGsAmo9WcJoRNvSTRdKPQU9BubcNJ0ghyRk8W4TM7MpVt3TiaZ8EnbPR7qSoWDhT6zT3bu2blT78AQBKXM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com; spf=pass smtp.mailfrom=bytedance.com; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b=IZ3BPeGq; arc=none smtp.client-ip=209.85.210.174 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b="IZ3BPeGq" Received: by mail-pf1-f174.google.com with SMTP id d2e1a72fcca58-728f1e66418so3210250b3a.2 for ; Mon, 23 Dec 2024 01:44:58 -0800 (PST) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance.com; s=google; t=1734947098; x=1735551898; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=0nJ0bdEaVLB0+13gaCQHdoBfFkuXXT3Pqta7GOjv8Bo=; b=IZ3BPeGqREjYEkMakR8CC2iUnHDDNsFV+SUiQ5Rh+kKC1Voz/zW3kW1AqvJIQr96Pv Pn4vAaQAOdSAN2DwUNM621Wc+nd9U2bHTPDBu7o7MWSd8JcT9Ts8XoPEmWGH86KaQoKP IDr3ALL5a6oA+2iVxE1gqf+cd3Rv7qurRyGZjqj+tAT8FJlNgdyf3GRRbq665CN4Nnkg cdW6OUjO73QBCBhhlKu+9ZAwtkxylw4V8mRXYT53HMP719GfPweSgNPj2yBXG4orTfaS Uj+Gn8qfWjOwXlFS+kJjdbs8jY9SOhOWOZD6mcRzI6kZHuLiYpTq4cWbSXeQFCFdB9y9 zPAg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734947098; x=1735551898; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=0nJ0bdEaVLB0+13gaCQHdoBfFkuXXT3Pqta7GOjv8Bo=; b=Ns0gJbfwfVznYrJnmXxvdZI/PR4PiilhjIvs9OIRgijohavqAlnxzfBXjSFlDSuc/N XDSnCkpc0m9Ru+zP5wCtLmVz9HKJpiHAUgwwZ3hSB+3SUGxfI4Az29BnQedscMl+2vQ7 sWsOZo+m0ZuRHdPFxbt5M9GOQkJsjRUg0ly0OVJfLWZsm70yd139IVBjKRmLGKiUV+VV RBv3ThzleC5SGweaPuuIc1N3cMjl4T8wJzup5HBROjkomZLC/gwVS4xmi/GP2XcePibO iYOXKVSnRfkOk+WwR7SLaS3BmB/JRQp0+annBi3de4AgueGOpnYC843DMy8kux+khtZj xFVQ== X-Forwarded-Encrypted: i=1; AJvYcCWXwMeJePg3lDtfr6EJH33bPzvOZGAgdSGdI6ogsP8no4agciLlwVUxniHeNQBTcEgAm+elIvGumQ==@vger.kernel.org X-Gm-Message-State: AOJu0YzBO8aTqda8SxCzc2xR6iO6hSr1LtYlvssgANqYCu+jIpnydb/5 I7+hTazt2BXFJ/9jp+CJkBpToMP376hLYpuzqiTLLScAJE4+UhkLtcVabqLY96U= X-Gm-Gg: ASbGncviyPAsQk+kuSbkBidZNlVENZw7mnMRyXqtfJnmTdWbnlwQzUC5wcCLPbiTi2j BYrSdhm2EpXvVNRyEZuP5lXcSKMJzPD4BQZU4GmxmrQF/5WkhxFzoMS3va8CmluOTpKynszQqK3 vYf1Ut5A6y1l9s8U4oLC2QTvnllU6FjwdyPNa5JySiM5ImIhiY9UC38K3Z6bb+bp+HHJD0+g+9A T0NA4AhWR68VYnR2WcD9sFUZEV3YRYC0Bru2Qi6KlPcA3Op2RWg8oA4kSjVLQ2o3FVvSDRYVcTx 5h02Sl20AKGgJEapG+8YWQ== X-Google-Smtp-Source: AGHT+IH27a1l8kT42WYYi2KTFhW6smXjjCcrkYYj/EF7HLDGUXcMfllZ3ayHCCSFU+IsHMUxZvkRwQ== X-Received: by 2002:a05:6a21:6f87:b0:1db:c20f:2c4d with SMTP id adf61e73a8af0-1e5e044b1c1mr19165305637.2.1734947098042; Mon, 23 Dec 2024 01:44:58 -0800 (PST) Received: from C02DW0BEMD6R.bytedance.net ([203.208.167.150]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-72aad8486c6sm7468309b3a.85.2024.12.23.01.44.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 23 Dec 2024 01:44:57 -0800 (PST) From: Qi Zheng To: peterz@infradead.org, agordeev@linux.ibm.com, kevin.brodsky@arm.com, tglx@linutronix.de, david@redhat.com, jannh@google.com, hughd@google.com, yuzhao@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, lorenzo.stoakes@oracle.com, akpm@linux-foundation.org, rientjes@google.com, vishal.moola@gmail.com, arnd@arndb.de, will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com, dave.hansen@linux.intel.com, rppt@kernel.org, ryan.roberts@arm.com Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-sh@vger.kernel.org, linux-um@lists.infradead.org, Qi Zheng Subject: [PATCH v3 09/17] arm64: pgtable: move pagetable_dtor() to __tlb_remove_table() 
Date: Mon, 23 Dec 2024 17:40:55 +0800 Message-Id: X-Mailer: git-send-email 2.24.3 (Apple Git-128) In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-sh@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Move pagetable_dtor() to __tlb_remove_table(), so that ptlock and page table pages can be freed together (regardless of whether RCU is used). This prevents the use-after-free problem where the ptlock is freed immediately but the page table pages is freed later via RCU. Page tables shouldn't have swap cache, so use pagetable_free() instead of free_page_and_swap_cache() to free page table pages. Signed-off-by: Qi Zheng Suggested-by: Peter Zijlstra (Intel) Cc: linux-arm-kernel@lists.infradead.org --- arch/arm64/include/asm/tlb.h | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index 408d0f36a8a8f..93591a80b5bfb 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -9,11 +9,13 @@ #define __ASM_TLB_H #include -#include static inline void __tlb_remove_table(void *_table) { - free_page_and_swap_cache((struct page *)_table); + struct ptdesc *ptdesc = (struct ptdesc *)_table; + + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); } #define tlb_flush tlb_flush @@ -82,7 +84,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, { struct ptdesc *ptdesc = page_ptdesc(pte); - pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } @@ -92,7 +93,6 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, { struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -106,7 +106,6 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, if (!pgtable_l4_enabled()) return; - pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -120,7 +119,6 @@ static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4dp, if (!pgtable_l5_enabled()) return; - pagetable_dtor(ptdesc); tlb_remove_ptdesc(tlb, ptdesc); } #endif From patchwork Mon Dec 23 09:40:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13918688 Received: from mail-pg1-f170.google.com (mail-pg1-f170.google.com [209.85.215.170]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CAEFC1A8F9C for ; Mon, 23 Dec 2024 09:45:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.170 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947114; cv=none; b=A36i578OmN5vf0HCyYjAlNvlGJjzhtTAxQyO6UKYgeZlKKTIUvh9QPaIxglfn6rjI+dUDCPJMOxPYDkYU3a6jueWiQfi+ZgqG5tfd8IaNq0Uis58jYpTXuOv8p2Mkv2lPL/pVgroM8nlb515//TypUgYJgPJew4vFuUFZQqyxYc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947114; c=relaxed/simple; bh=d8h9y7EmWepeeSdJqmfwFJOw7y0NPJCnETYa2k/Hjdk=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=siBduMbL5DwXbpBiq+1lOWkqy6SdAYLCX8lqwKHab5umclpyV0/oKgzFd655hkrUKRt4YvcPNVBvaNUTHlJc0P9AHeAO7zXj1xHRrIXXU0o49L6gBgzsGRyIxQ4qIyhfKaRNm/hUBO7Ow8Akh26GBc+8+YDd7YQIrIvmL0Xhmcw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com; spf=pass smtp.mailfrom=bytedance.com; dkim=pass (2048-bit key) 
header.d=bytedance.com header.i=@bytedance.com header.b=hhjTfdDL; arc=none smtp.client-ip=209.85.215.170 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b="hhjTfdDL" Received: by mail-pg1-f170.google.com with SMTP id 41be03b00d2f7-801c8164ef9so2836054a12.1 for ; Mon, 23 Dec 2024 01:45:11 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance.com; s=google; t=1734947111; x=1735551911; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=76YtbImY0jPwdXnQQVs9oYKl35yZZ81uc4A2YpilpOk=; b=hhjTfdDL3hINzsNEje4Uab3t/9o3lF3NwnHmauQcG6QyJEXAN1mfyIP3FpalrADjIM kMXwgQzXjXxRgEnMr2xO25U9mhoIFHt0uVhuGA9d9hleDd7HlCWWtwY4WP97N6z8UcBv OV8ke5AyomJc6s9VA7lDdM/2LTxhikkiz99tmTsDV/45gqL/ch+Rv+6nMCQJ8J2XPhd3 iCcT+QevH9qr6oraNxgGL5AuXfHoJXOOQOZoG6oGcZMae4y4JRtKMzzaz5f7aWtHndH/ mpC7nU0FjkoQf7lyiXhKDRvTltjNoCUcvAoKQYr65rSuKd79UWsIKyaFoj1IHacxqBUG ckgw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734947111; x=1735551911; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=76YtbImY0jPwdXnQQVs9oYKl35yZZ81uc4A2YpilpOk=; b=UMEnB4Sgo9S7LSf7+WXFOveAgYyD7xyOjYLN146X4qsPzD0ycplUJR0w1kMb0+nDEy AaUhA7NBTqmfTwA/y4FxhoHBjj03eGfxeVcXPoycdiMZTL8jHaZjRUzK/tqomcNdkPCQ 0p31P2IuBCAPtBLCKzFh1ChXj2icdK9CHN/fUnKtltlV+NvKuoOqr4r3zOL4pjbUCKwV PqcbzbVv0D4DC0h6xihXrPJjgQvAc3QpRKK0CjcIhg315V1tp5oMByB9CG750cjPoTJH ny0DyeyuYfv7+JyFBgp0TGBX4Q1h3omFXlKItPMv86L0awVPXCxkFfu0D/7YVIvtLbAI 7g2g== X-Forwarded-Encrypted: i=1; AJvYcCUg/YQw+uBSuoYyZu+KgyO4Q2driQUiBNyaZUlBDzE2vkyjHw91mBbuqwUgHk1XSrNVAVZpVmdy5A==@vger.kernel.org X-Gm-Message-State: AOJu0YwjbMR6HY93HZWs/PR3si61V+OkioszfISt7aFCZLc6y+aSV9Ux jOjSloegljrnlic5G2t7rixmeLWjMuTcorRsVCJBzW77nSs1WaRa/cEZwEdpO0Q= X-Gm-Gg: ASbGncvqF97aR6ODTDO1M51Dn8/nrQx/Voo7aq/PZYTffx+031X+W68MYGFQBH+xLEZ 2oGZhKXfjU4gUZU5obQNEgwXfBD7nJbbCibRaII6gCrA03AAr06UrYcr8xmg7wl0vIO4ULQvvST aqmzqwAm7hFzsdTShi6YhalYivhfMpdnmnoDE51tBr4qCpqIou0JMUp22n/8uued3rQzhWdZJa8 bWdymuB0qnOzo80+nd8cktvKibwhk5rrvASkFbHtp27ai4lmO2s89AmfV2Sj2RuC3L94KWnUQQw 0qaBLi4Kkd/kdcV6qF12QA== X-Google-Smtp-Source: AGHT+IFO8x3oURIM+5JbbGjULQ8gSrXSXvX9ZJiJMgilE8gLJBNqFvKuKmavbyH4ASOcC8EYHQP7xQ== X-Received: by 2002:a05:6a20:1593:b0:1e1:ffec:b1a9 with SMTP id adf61e73a8af0-1e5c6ec6f11mr24851906637.3.1734947111156; Mon, 23 Dec 2024 01:45:11 -0800 (PST) Received: from C02DW0BEMD6R.bytedance.net ([203.208.167.150]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-72aad8486c6sm7468309b3a.85.2024.12.23.01.44.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 23 Dec 2024 01:45:10 -0800 (PST) From: Qi Zheng To: peterz@infradead.org, agordeev@linux.ibm.com, kevin.brodsky@arm.com, tglx@linutronix.de, david@redhat.com, jannh@google.com, hughd@google.com, yuzhao@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, lorenzo.stoakes@oracle.com, akpm@linux-foundation.org, rientjes@google.com, vishal.moola@gmail.com, arnd@arndb.de, will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com, dave.hansen@linux.intel.com, rppt@kernel.org, ryan.roberts@arm.com Cc: 
linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-sh@vger.kernel.org, linux-um@lists.infradead.org, Qi Zheng Subject: [PATCH v3 10/17] riscv: pgtable: move pagetable_dtor() to __tlb_remove_table() Date: Mon, 23 Dec 2024 17:40:56 +0800 Message-Id: <0e8f0b3835c15e99145e0006ac1020ae45a2b166.1734945104.git.zhengqi.arch@bytedance.com> X-Mailer: git-send-email 2.24.3 (Apple Git-128) In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-sh@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Move pagetable_dtor() to __tlb_remove_table(), so that ptlock and page table pages can be freed together (regardless of whether RCU is used). This prevents the use-after-free problem where the ptlock is freed immediately but the page table pages is freed later via RCU. Page tables shouldn't have swap cache, so use pagetable_free() instead of free_page_and_swap_cache() to free page table pages. By the way, move the comment above __tlb_remove_table() to riscv_tlb_remove_ptdesc(), it will be more appropriate. Signed-off-by: Qi Zheng Suggested-by: Peter Zijlstra (Intel) Cc: linux-riscv@lists.infradead.org --- arch/riscv/include/asm/pgalloc.h | 38 ++++++++++++++------------------ arch/riscv/include/asm/tlb.h | 14 ++++-------- 2 files changed, 21 insertions(+), 31 deletions(-) diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h index b6793c5c99296..c8907b8317115 100644 --- a/arch/riscv/include/asm/pgalloc.h +++ b/arch/riscv/include/asm/pgalloc.h @@ -15,12 +15,22 @@ #define __HAVE_ARCH_PUD_FREE #include +/* + * While riscv platforms with riscv_ipi_for_rfence as true require an IPI to + * perform TLB shootdown, some platforms with riscv_ipi_for_rfence as false use + * SBI to perform TLB shootdown. To keep software pagetable walkers safe in this + * case we switch to RCU based table free (MMU_GATHER_RCU_TABLE_FREE). See the + * comment below 'ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE' in include/asm-generic/tlb.h + * for more details. 
+ */ static inline void riscv_tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt) { - if (riscv_use_sbi_for_rfence()) + if (riscv_use_sbi_for_rfence()) { tlb_remove_ptdesc(tlb, pt); - else + } else { + pagetable_dtor(pt); tlb_remove_page_ptdesc(tlb, pt); + } } static inline void pmd_populate_kernel(struct mm_struct *mm, @@ -97,23 +107,15 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud) static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, unsigned long addr) { - if (pgtable_l4_enabled) { - struct ptdesc *ptdesc = virt_to_ptdesc(pud); - - pagetable_dtor(ptdesc); - riscv_tlb_remove_ptdesc(tlb, ptdesc); - } + if (pgtable_l4_enabled) + riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(pud)); } static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, unsigned long addr) { - if (pgtable_l5_enabled) { - struct ptdesc *ptdesc = virt_to_ptdesc(p4d); - - pagetable_dtor(ptdesc); + if (pgtable_l5_enabled) riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); - } } #endif /* __PAGETABLE_PMD_FOLDED */ @@ -142,10 +144,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr) { - struct ptdesc *ptdesc = virt_to_ptdesc(pmd); - - pagetable_dtor(ptdesc); - riscv_tlb_remove_ptdesc(tlb, ptdesc); + riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(pmd)); } #endif /* __PAGETABLE_PMD_FOLDED */ @@ -153,10 +152,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - struct ptdesc *ptdesc = page_ptdesc(pte); - - pagetable_dtor(ptdesc); - riscv_tlb_remove_ptdesc(tlb, ptdesc); + riscv_tlb_remove_ptdesc(tlb, page_ptdesc(pte)); } #endif /* CONFIG_MMU */ diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h index 1f6c38420d8e0..ded8724b3c4f7 100644 --- a/arch/riscv/include/asm/tlb.h +++ b/arch/riscv/include/asm/tlb.h @@ -11,19 +11,13 @@ struct mmu_gather; static void tlb_flush(struct mmu_gather *tlb); #ifdef CONFIG_MMU -#include -/* - * While riscv platforms with riscv_ipi_for_rfence as true require an IPI to - * perform TLB shootdown, some platforms with riscv_ipi_for_rfence as false use - * SBI to perform TLB shootdown. To keep software pagetable walkers safe in this - * case we switch to RCU based table free (MMU_GATHER_RCU_TABLE_FREE). See the - * comment below 'ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE' in include/asm-generic/tlb.h - * for more details. 
- */ static inline void __tlb_remove_table(void *table) { - free_page_and_swap_cache(table); + struct ptdesc *ptdesc = (struct ptdesc *)table; + + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); } #endif /* CONFIG_MMU */ From patchwork Mon Dec 23 09:40:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13918689 Received: from mail-pf1-f180.google.com (mail-pf1-f180.google.com [209.85.210.180]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7756618DF6E for ; Mon, 23 Dec 2024 09:45:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.180 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947127; cv=none; b=dtExoEvLutPiHVRxA/LClwiQrDyuuiTvUWEbcBWKrBq1lUE27bmlQyha7na+xDY5JCKQbZ+mVB9ZxDLywRuZm451HVbmLj1gfCVmaakztGcXEXjC1GDUWK/GWCifbshN4b7UsFbbKlDY7BSpkN/vr166GgeDjLv8bn3fxOge5gc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947127; c=relaxed/simple; bh=0DrkqJj4wQeX5ghzMY0LbiVqPzUVuk7cM17qWFCGuf4=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=Lg1urv5GyCgZ9Ix9wUEU+UfCfA/bnzLgPPH2UmfppI14sOI8ry99oeiqFd0oTjx7LqF5k1SffcxMaPgipApRNYRub+IxfAu68hf4bKiNpReWhWsazGDwgg5taVpP+9MKOAlcAOwA05iVBVMilD9JZU1e0KpuPgAbhfUTHZaI+ww= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com; spf=pass smtp.mailfrom=bytedance.com; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b=ILpgUIOu; arc=none smtp.client-ip=209.85.210.180 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b="ILpgUIOu" Received: by mail-pf1-f180.google.com with SMTP id d2e1a72fcca58-725ee6f56b4so3378067b3a.3 for ; Mon, 23 Dec 2024 01:45:24 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance.com; s=google; t=1734947124; x=1735551924; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Rs6Lkouarv58ZoKb+WtHLVo9G63kikGgJGrD7ursbxc=; b=ILpgUIOu+Wy8FrgzTSZB2mk8ABhOh3ROj0ny4EM/VImeHeg3lJCoT9MA4LnQ8TJ3Mt 42CUUskgnSW9ZXJdZiBmDQQ/1vn/TsDhK8v310d6wOmM8IpIJma7j8mffuCd8CpbI4bI 2XQvgaelaRcDu2W0UKkvlndfnyON7c7liAf1fuotkNGDw5cUce7wAiaufBXLKr3EmzjP EW/dFNHQ+2oPSBZVoebTsQRZDtv2wjLyg51eKwCn0HwGehZN8qW4DGv4CPWb1nBECRn+ Ksl8TDRC0Ypq6jZc3WOJh73FCruKIQ8YN6cU3g4YCKzwSg5fGfTpEbQdfdx1rthDj7dk M6WQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734947124; x=1735551924; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Rs6Lkouarv58ZoKb+WtHLVo9G63kikGgJGrD7ursbxc=; b=X1bGqgbzPPgrz4qKSV7DkcanX7KUOBhVPx3rtiRWy2CB94NcTcjPoj2RU0Nx/WUfvr tO/reCo50ylafKwp1WT5D+HW7P6W43vpsrOehM65VdBxuioS8mp7BMD7NJwTCEEPsvSe Zwoju+pu339OLyvTnfeh+mFuxgKK8VgGmfRD0HF9wsiPh109aLKCf2bs1mCQg17Z7Hhn FnNsK+HWcqPWVnIdaK7zYk5NBARibncQJUobadGIONJUBwvf3KiEAoMfP3Z5+tWo+Gz9 
Ojp8dAKIHwV8pwYVnsA+Foi8Ba0eX1KWPGWaY0wA2VBRd1AZ4MWbgq/Wwl2fMnZ+yIPX +Ruw== X-Forwarded-Encrypted: i=1; AJvYcCUqmi0uTuVnzFAsYuEi+9glqeXfbJW94WwOom0RL9GprgGSrUesGbYLDj0OeTuM+rIjjzyj5flOdw==@vger.kernel.org X-Gm-Message-State: AOJu0Yw7GBNC5QOCHdoToEn9KED0sH+zyZ+4M/Y1l8a1Arzx7Y7rg2Hc Oifj0Ta+nTtdOqgrBnkCQxTDyfMSPY2z++8pr9vuHsIcWYbaowyXMIbnPckxhXE= X-Gm-Gg: ASbGncsa2wxZD9T8ghOvF5F6YAcWs87m2ytWSAP57fxImGNf5QWtQbH8usw6LHsEH1b IC4TYG87erHhkk189r+bgze5fHM13730c61cHQDrolek5PWaEC1L8Yvgtq5i6AJwRJoScmI91Qi apcZvhiFyeNrtRNpZvP13uvKdxGBJUK56tl6QyOejKShDWOnsiK4ZZo69Xa/NTmNGCCAkiLO3eS AgRrc3Ja3RkRzreGvl0PfxNng3jhB3jG95lkv+n3dGN7apgL7SOFF3/NnqHQuMphUk8JytMA6Mx CFt6iUwJL7FaPAnsQ8IsNw== X-Google-Smtp-Source: AGHT+IEsUkA9RBo0Vmvp6m94/gMVFQ4PHl6SoN42Z+rLx+q01B5oEKa1sTTPOWHs1mFCNObFPbuRxA== X-Received: by 2002:a05:6a20:6a25:b0:1d9:18af:d150 with SMTP id adf61e73a8af0-1e5e05a9e39mr19676430637.21.1734947123919; Mon, 23 Dec 2024 01:45:23 -0800 (PST) Received: from C02DW0BEMD6R.bytedance.net ([203.208.167.150]) by smtp.gmail.com with ESMTPSA id d2e1a72fcca58-72aad8486c6sm7468309b3a.85.2024.12.23.01.45.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 23 Dec 2024 01:45:23 -0800 (PST) From: Qi Zheng To: peterz@infradead.org, agordeev@linux.ibm.com, kevin.brodsky@arm.com, tglx@linutronix.de, david@redhat.com, jannh@google.com, hughd@google.com, yuzhao@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, lorenzo.stoakes@oracle.com, akpm@linux-foundation.org, rientjes@google.com, vishal.moola@gmail.com, arnd@arndb.de, will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com, dave.hansen@linux.intel.com, rppt@kernel.org, ryan.roberts@arm.com Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-sh@vger.kernel.org, linux-um@lists.infradead.org, Qi Zheng Subject: [PATCH v3 11/17] x86: pgtable: move pagetable_dtor() to __tlb_remove_table() Date: Mon, 23 Dec 2024 17:40:57 +0800 Message-Id: <0dc5a3bf5a692e24379c1d3b879a6d4396f0dbbd.1734945104.git.zhengqi.arch@bytedance.com> X-Mailer: git-send-email 2.24.3 (Apple Git-128) In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-sh@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Move pagetable_dtor() to __tlb_remove_table(), so that ptlock and page table pages can be freed together (regardless of whether RCU is used). This prevents the use-after-free problem where the ptlock is freed immediately but the page table pages is freed later via RCU. Page tables shouldn't have swap cache, so use pagetable_free() instead of free_page_and_swap_cache() to free page table pages. 
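In short, the x86 table free path ends up as below, matching the arch/x86/include/asm/tlb.h hunk that follows (shown here only for illustration):

static inline void __tlb_remove_table(void *table)
{
	struct ptdesc *ptdesc = (struct ptdesc *)table;

	/* ptlock and the page table page are now torn down together */
	pagetable_dtor(ptdesc);
	pagetable_free(ptdesc);
}
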
Signed-off-by: Qi Zheng Suggested-by: Peter Zijlstra (Intel) Cc: x86@kernel.org --- arch/x86/include/asm/tlb.h | 17 ++++++++++------- arch/x86/kernel/paravirt.c | 1 + arch/x86/mm/pgtable.c | 12 ++---------- 3 files changed, 13 insertions(+), 17 deletions(-) diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h index 73f0786181cc9..f64730be5ad67 100644 --- a/arch/x86/include/asm/tlb.h +++ b/arch/x86/include/asm/tlb.h @@ -31,24 +31,27 @@ static inline void tlb_flush(struct mmu_gather *tlb) */ static inline void __tlb_remove_table(void *table) { - free_page_and_swap_cache(table); + struct ptdesc *ptdesc = (struct ptdesc *)table; + + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); } #ifdef CONFIG_PT_RECLAIM static inline void __tlb_remove_table_one_rcu(struct rcu_head *head) { - struct page *page; + struct ptdesc *ptdesc; - page = container_of(head, struct page, rcu_head); - put_page(page); + ptdesc = container_of(head, struct ptdesc, pt_rcu_head); + __tlb_remove_table(ptdesc); } static inline void __tlb_remove_table_one(void *table) { - struct page *page; + struct ptdesc *ptdesc; - page = table; - call_rcu(&page->rcu_head, __tlb_remove_table_one_rcu); + ptdesc = table; + call_rcu(&ptdesc->pt_rcu_head, __tlb_remove_table_one_rcu); } #define __tlb_remove_table_one __tlb_remove_table_one #endif /* CONFIG_PT_RECLAIM */ diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c index 7bdcf152778c0..46d5d325483b0 100644 --- a/arch/x86/kernel/paravirt.c +++ b/arch/x86/kernel/paravirt.c @@ -62,6 +62,7 @@ void __init native_pv_lock_init(void) #ifndef CONFIG_PT_RECLAIM static void native_tlb_remove_table(struct mmu_gather *tlb, void *table) { + pagetable_dtor(table); tlb_remove_page(tlb, table); } #else diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index a6cd9660e29ec..a0b0e501ba663 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -23,6 +23,7 @@ EXPORT_SYMBOL(physical_mask); static inline void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table) { + pagetable_dtor(table); tlb_remove_page(tlb, table); } #else @@ -60,7 +61,6 @@ early_param("userpte", setup_userpte); void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) { - pagetable_dtor(page_ptdesc(pte)); paravirt_release_pte(page_to_pfn(pte)); paravirt_tlb_remove_table(tlb, pte); } @@ -68,7 +68,6 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) #if CONFIG_PGTABLE_LEVELS > 2 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) { - struct ptdesc *ptdesc = virt_to_ptdesc(pmd); paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT); /* * NOTE! 
For PAE, any changes to the top page-directory-pointer-table @@ -77,16 +76,12 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) #ifdef CONFIG_X86_PAE tlb->need_flush_all = 1; #endif - pagetable_dtor(ptdesc); - paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc)); + paravirt_tlb_remove_table(tlb, virt_to_page(pmd)); } #if CONFIG_PGTABLE_LEVELS > 3 void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud) { - struct ptdesc *ptdesc = virt_to_ptdesc(pud); - - pagetable_dtor(ptdesc); paravirt_release_pud(__pa(pud) >> PAGE_SHIFT); paravirt_tlb_remove_table(tlb, virt_to_page(pud)); } @@ -94,9 +89,6 @@ void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud) #if CONFIG_PGTABLE_LEVELS > 4 void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d) { - struct ptdesc *ptdesc = virt_to_ptdesc(p4d); - - pagetable_dtor(ptdesc); paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT); paravirt_tlb_remove_table(tlb, virt_to_page(p4d)); } From patchwork Mon Dec 23 09:40:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13918690 Received: from mail-pf1-f176.google.com (mail-pf1-f176.google.com [209.85.210.176]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2B733194AC7 for ; Mon, 23 Dec 2024 09:45:37 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.176 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947139; cv=none; b=MiCbktlHs+e7y0IidDlJT4RxgnMcQIieiXdqTPTUwBG86p4luDy27Ue2OY7ZhKFjK6pN7iuMqj9MzgPItUNSXrrrHVh8on9TqCJeKPX9Vjm8TGDQkQxR9mGLwZO7lGt8Veh9xXd4x8Vm3vi8GcnTGy+ox5yDW1lGQXT3/JRX/zQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947139; c=relaxed/simple; bh=ySnIl2i3Fo/t1YN/rcwyhGqxWwlx1TNUJRz33ZWxlkE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=VlDIjXeAQEAccCB0Wke86+qRkAIjU5UgPF0K91eW58dsibpK7PPyzteRpco6tZIjhVSulXN7Kzj/IaIqBlIIneIvNRHP6M8EAtpLZ+GZZsHYZD/Xh8eAsZ4+Ct1tN/5MRI2VeSLRp3xLVD05YBwKMA1yhoYLDmPu0g0mZxBzxeo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com; spf=pass smtp.mailfrom=bytedance.com; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b=hNXfuXhw; arc=none smtp.client-ip=209.85.210.176 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b="hNXfuXhw" Received: by mail-pf1-f176.google.com with SMTP id d2e1a72fcca58-728f337a921so4004070b3a.3 for ; Mon, 23 Dec 2024 01:45:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance.com; s=google; t=1734947136; x=1735551936; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=jyEzElX2/Cl2Hb0vACePBo9dDp7jwROhL3d0Mq9f7ig=; b=hNXfuXhwOhMw48dr3r+kXqLcQkoMoJLhWGchERDwoRDWksmqZlRFv+Oz9JnGvA2DC0 EEdOZUaAluEAmL2UkJgPwm8D00T+9KbKF3ayO9AR3J2xXA7CFtbfj+ein9qm2KdFiutZ Q3WBGuzlwc2HK1qfSDR+g87FWzXgYvdIjSJFMtD0+D+M2nu2F7k/P/ZF5SMuhvz+4n3x 1C1NTbrsci9IoG5KhcKoTYYafOYim+JLiM5lnFlmQBUdihZqpyQmx86jTa6RnYhMLDMN 
From: Qi Zheng
Subject: [PATCH v3 12/17] s390: pgtable: also move pagetable_dtor() of PxD to __tlb_remove_table()
Date: Mon, 23 Dec 2024 17:40:58 +0800

To unify the PxD and PTE TLB free paths, also move the pagetable_dtor() calls for PMD|PUD|P4D page tables into __tlb_remove_table().
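The net effect on s390, per the hunks below, is that PTE tables and PMD/PUD/P4D tables now share one teardown path; roughly:

	/* one helper for every page table level */
	static void pagetable_dtor_free(struct ptdesc *ptdesc)
	{
		pagetable_dtor(ptdesc);
		pagetable_free(ptdesc);
	}

	void __tlb_remove_table(void *table)
	{
		/* no more special-casing of CRST_ALLOC_ORDER (pmd/pud/p4d) tables */
		pagetable_dtor_free(virt_to_ptdesc(table));
	}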
Signed-off-by: Qi Zheng Suggested-by: Peter Zijlstra (Intel) Cc: linux-s390@vger.kernel.org --- arch/s390/include/asm/tlb.h | 3 --- arch/s390/mm/pgalloc.c | 14 ++++---------- 2 files changed, 4 insertions(+), 13 deletions(-) diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h index 74b6fba4c2ee3..79df7c0932c56 100644 --- a/arch/s390/include/asm/tlb.h +++ b/arch/s390/include/asm/tlb.h @@ -102,7 +102,6 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, { if (mm_pmd_folded(tlb->mm)) return; - pagetable_dtor(virt_to_ptdesc(pmd)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; @@ -122,7 +121,6 @@ static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, { if (mm_pud_folded(tlb->mm)) return; - pagetable_dtor(virt_to_ptdesc(pud)); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_p4ds = 1; @@ -141,7 +139,6 @@ static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, { if (mm_p4d_folded(tlb->mm)) return; - pagetable_dtor(virt_to_ptdesc(p4d)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index 569de24d33761..c73b89811a264 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -180,7 +180,7 @@ unsigned long *page_table_alloc(struct mm_struct *mm) return table; } -static void pagetable_pte_dtor_free(struct ptdesc *ptdesc) +static void pagetable_dtor_free(struct ptdesc *ptdesc) { pagetable_dtor(ptdesc); pagetable_free(ptdesc); @@ -190,20 +190,14 @@ void page_table_free(struct mm_struct *mm, unsigned long *table) { struct ptdesc *ptdesc = virt_to_ptdesc(table); - pagetable_pte_dtor_free(ptdesc); + pagetable_dtor_free(ptdesc); } void __tlb_remove_table(void *table) { struct ptdesc *ptdesc = virt_to_ptdesc(table); - struct page *page = ptdesc_page(ptdesc); - if (compound_order(page) == CRST_ALLOC_ORDER) { - /* pmd, pud, or p4d */ - pagetable_free(ptdesc); - return; - } - pagetable_pte_dtor_free(ptdesc); + pagetable_dtor_free(ptdesc); } #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -211,7 +205,7 @@ static void pte_free_now(struct rcu_head *head) { struct ptdesc *ptdesc = container_of(head, struct ptdesc, pt_rcu_head); - pagetable_pte_dtor_free(ptdesc); + pagetable_dtor_free(ptdesc); } void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable) From patchwork Mon Dec 23 09:40:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13918691 Received: from mail-pf1-f169.google.com (mail-pf1-f169.google.com [209.85.210.169]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DA764194C67 for ; Mon, 23 Dec 2024 09:45:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.169 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947151; cv=none; b=qpBWgR/57JHot2PfCOTSQlQ5DcaGpK/Dv0k2GYhCnU33NoQTjJ7O4KO9UtQaXvEN/whOSJh5UgpdKw6KUh//WKBlRPxLyzof+5/n9WN4UrGlUfpBFH7CENFf0d10lbHcRSZI9V7Sn2ECFACzs1mPozpBnt3guzog6vYqYsgCozU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947151; c=relaxed/simple; bh=EElIB1gpaaG9REY1IEzvLAKdR3Au6Zf5cYxlQP4u0WQ=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; 
From: Qi Zheng
To: peterz@infradead.org, agordeev@linux.ibm.com, kevin.brodsky@arm.com, tglx@linutronix.de, david@redhat.com,
jannh@google.com, hughd@google.com, yuzhao@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, lorenzo.stoakes@oracle.com, akpm@linux-foundation.org, rientjes@google.com, vishal.moola@gmail.com, arnd@arndb.de, will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com, dave.hansen@linux.intel.com, rppt@kernel.org, ryan.roberts@arm.com Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-sh@vger.kernel.org, linux-um@lists.infradead.org, Qi Zheng Subject: [PATCH v3 13/17] mm: pgtable: introduce generic __tlb_remove_table() Date: Mon, 23 Dec 2024 17:40:59 +0800 Message-Id: X-Mailer: git-send-email 2.24.3 (Apple Git-128) In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-sh@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Several architectures (arm, arm64, riscv and x86) define exactly the same __tlb_remove_table(), just introduce generic __tlb_remove_table() to eliminate these duplications. The s390 __tlb_remove_table() is nearly the same, so also make s390 __tlb_remove_table() version generic. Signed-off-by: Qi Zheng --- arch/arm/include/asm/tlb.h | 9 --------- arch/arm64/include/asm/tlb.h | 7 ------- arch/powerpc/include/asm/tlb.h | 1 + arch/riscv/include/asm/tlb.h | 12 ------------ arch/s390/include/asm/tlb.h | 9 ++++----- arch/s390/mm/pgalloc.c | 7 ------- arch/sparc/include/asm/tlb_32.h | 1 + arch/sparc/include/asm/tlb_64.h | 1 + arch/x86/include/asm/tlb.h | 17 ----------------- include/asm-generic/tlb.h | 15 +++++++++++++-- 10 files changed, 20 insertions(+), 59 deletions(-) diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index 264ab635e807a..ea4fbe7b17f6f 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -27,15 +27,6 @@ #else /* !CONFIG_MMU */ #include - -static inline void __tlb_remove_table(void *_table) -{ - struct ptdesc *ptdesc = (struct ptdesc *)_table; - - pagetable_dtor(ptdesc); - pagetable_free(ptdesc); -} - #include static inline void diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index 93591a80b5bfb..8d762607285cc 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -10,13 +10,6 @@ #include -static inline void __tlb_remove_table(void *_table) -{ - struct ptdesc *ptdesc = (struct ptdesc *)_table; - - pagetable_dtor(ptdesc); - pagetable_free(ptdesc); -} #define tlb_flush tlb_flush static void tlb_flush(struct mmu_gather *tlb); diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h index 1ca7d4c4b90db..2058e8d3e0138 100644 --- a/arch/powerpc/include/asm/tlb.h +++ b/arch/powerpc/include/asm/tlb.h @@ -37,6 +37,7 @@ extern void tlb_flush(struct mmu_gather *tlb); */ #define tlb_needs_table_invalidate() radix_enabled() +#define __HAVE_ARCH_TLB_REMOVE_TABLE /* Get the generic bits... 
*/ #include diff --git a/arch/riscv/include/asm/tlb.h b/arch/riscv/include/asm/tlb.h index ded8724b3c4f7..50b63b5c15bd8 100644 --- a/arch/riscv/include/asm/tlb.h +++ b/arch/riscv/include/asm/tlb.h @@ -10,18 +10,6 @@ struct mmu_gather; static void tlb_flush(struct mmu_gather *tlb); -#ifdef CONFIG_MMU - -static inline void __tlb_remove_table(void *table) -{ - struct ptdesc *ptdesc = (struct ptdesc *)table; - - pagetable_dtor(ptdesc); - pagetable_free(ptdesc); -} - -#endif /* CONFIG_MMU */ - #define tlb_flush tlb_flush #include diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h index 79df7c0932c56..da4a7d175f69c 100644 --- a/arch/s390/include/asm/tlb.h +++ b/arch/s390/include/asm/tlb.h @@ -22,7 +22,6 @@ * Pages used for the page tables is a different story. FIXME: more */ -void __tlb_remove_table(void *_table); static inline void tlb_flush(struct mmu_gather *tlb); static inline bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, bool delay_rmap, int page_size); @@ -87,7 +86,7 @@ static inline void pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, tlb->cleared_pmds = 1; if (mm_alloc_pgste(tlb->mm)) gmap_unlink(tlb->mm, (unsigned long *)pte, address); - tlb_remove_ptdesc(tlb, pte); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(pte)); } /* @@ -106,7 +105,7 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_puds = 1; - tlb_remove_ptdesc(tlb, pmd); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(pmd)); } /* @@ -124,7 +123,7 @@ static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_p4ds = 1; - tlb_remove_ptdesc(tlb, pud); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(pud)); } /* @@ -142,7 +141,7 @@ static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; - tlb_remove_ptdesc(tlb, p4d); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); } #endif /* _S390_TLB_H */ diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index c73b89811a264..3e002dea6278f 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -193,13 +193,6 @@ void page_table_free(struct mm_struct *mm, unsigned long *table) pagetable_dtor_free(ptdesc); } -void __tlb_remove_table(void *table) -{ - struct ptdesc *ptdesc = virt_to_ptdesc(table); - - pagetable_dtor_free(ptdesc); -} - #ifdef CONFIG_TRANSPARENT_HUGEPAGE static void pte_free_now(struct rcu_head *head) { diff --git a/arch/sparc/include/asm/tlb_32.h b/arch/sparc/include/asm/tlb_32.h index 5cd28a8793e39..910254867dfbd 100644 --- a/arch/sparc/include/asm/tlb_32.h +++ b/arch/sparc/include/asm/tlb_32.h @@ -2,6 +2,7 @@ #ifndef _SPARC_TLB_H #define _SPARC_TLB_H +#define __HAVE_ARCH_TLB_REMOVE_TABLE #include #endif /* _SPARC_TLB_H */ diff --git a/arch/sparc/include/asm/tlb_64.h b/arch/sparc/include/asm/tlb_64.h index 3037187482db7..1a6e694418e39 100644 --- a/arch/sparc/include/asm/tlb_64.h +++ b/arch/sparc/include/asm/tlb_64.h @@ -33,6 +33,7 @@ void flush_tlb_pending(void); #define tlb_needs_table_invalidate() (false) #endif +#define __HAVE_ARCH_TLB_REMOVE_TABLE #include #endif /* _SPARC64_TLB_H */ diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h index f64730be5ad67..3858dbf75880e 100644 --- a/arch/x86/include/asm/tlb.h +++ b/arch/x86/include/asm/tlb.h @@ -20,23 +20,6 @@ static inline void tlb_flush(struct mmu_gather *tlb) flush_tlb_mm_range(tlb->mm, start, 
end, stride_shift, tlb->freed_tables); } -/* - * While x86 architecture in general requires an IPI to perform TLB - * shootdown, enablement code for several hypervisors overrides - * .flush_tlb_others hook in pv_mmu_ops and implements it by issuing - * a hypercall. To keep software pagetable walkers safe in this case we - * switch to RCU based table free (MMU_GATHER_RCU_TABLE_FREE). See the comment - * below 'ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE' in include/asm-generic/tlb.h - * for more details. - */ -static inline void __tlb_remove_table(void *table) -{ - struct ptdesc *ptdesc = (struct ptdesc *)table; - - pagetable_dtor(ptdesc); - pagetable_free(ptdesc); -} - #ifdef CONFIG_PT_RECLAIM static inline void __tlb_remove_table_one_rcu(struct rcu_head *head) { diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index 709830274b756..69de47c7ef3c5 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -153,8 +153,9 @@ * * Useful if your architecture has non-page page directories. * - * When used, an architecture is expected to provide __tlb_remove_table() - * which does the actual freeing of these pages. + * When used, an architecture is expected to provide __tlb_remove_table() or + * use the generic __tlb_remove_table(), which does the actual freeing of these + * pages. * * MMU_GATHER_RCU_TABLE_FREE * @@ -207,6 +208,16 @@ struct mmu_table_batch { #define MAX_TABLE_BATCH \ ((PAGE_SIZE - sizeof(struct mmu_table_batch)) / sizeof(void *)) +#ifndef __HAVE_ARCH_TLB_REMOVE_TABLE +static inline void __tlb_remove_table(void *table) +{ + struct ptdesc *ptdesc = (struct ptdesc *)table; + + pagetable_dtor(ptdesc); + pagetable_free(ptdesc); +} +#endif + extern void tlb_remove_table(struct mmu_gather *tlb, void *table); #else /* !CONFIG_MMU_GATHER_HAVE_TABLE_FREE */ From patchwork Mon Dec 23 09:41:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13918692 Received: from mail-pf1-f179.google.com (mail-pf1-f179.google.com [209.85.210.179]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 893781AB6D4 for ; Mon, 23 Dec 2024 09:46:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.179 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947165; cv=none; b=KsMDwVUHyerTmEm3xxYuP03eC9b4eP5Q2DLYJNajEcWYjdk00vsrzwG3QRc6B8XRXd0Sywp3w7KIecNPjNPtNC7lDr4D9RuhJeadqOVH8Res6d7JBC4BM1JMPrldFIcNYZiIzHnzykoD4Nomk1BufDQdrL4qByY+hMSfrGHvVm4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947165; c=relaxed/simple; bh=AOHzYz3e0W68bypaywk5PqwWiTbsh7X+wa84FHfFzTo=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=tKvS8+5VDp4bc1nGD0qoKwte9KY12CLcwIe0J2BqwBd/25WHkqIw/ND1w4u1SdoUmxNViJlZb0tjYxTVxIHlQx1ozgjtKRwkHqsRw5uw833f1iAKnAvFcCbnI6mK/gYiqwUrJ82Y0LsJOCFBBi9L9NhBMFnaNKMTIBJsQjC1ytw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com; spf=pass smtp.mailfrom=bytedance.com; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b=jCnmVm/T; arc=none smtp.client-ip=209.85.210.179 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com 
From: Qi Zheng
To: peterz@infradead.org, agordeev@linux.ibm.com, kevin.brodsky@arm.com, tglx@linutronix.de, david@redhat.com, jannh@google.com, hughd@google.com, yuzhao@google.com, willy@infradead.org, muchun.song@linux.dev, vbabka@kernel.org, lorenzo.stoakes@oracle.com, akpm@linux-foundation.org, rientjes@google.com, vishal.moola@gmail.com, arnd@arndb.de, will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com, dave.hansen@linux.intel.com, rppt@kernel.org, ryan.roberts@arm.com
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-arch@vger.kernel.org, linux-csky@vger.kernel.org,
linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linux-sh@vger.kernel.org, linux-um@lists.infradead.org, Qi Zheng Subject: [PATCH v3 14/17] mm: pgtable: move __tlb_remove_table_one() in x86 to generic file Date: Mon, 23 Dec 2024 17:41:00 +0800 Message-Id: <286e9777dd266dc610de20120fae453b84d3a868.1734945104.git.zhengqi.arch@bytedance.com> X-Mailer: git-send-email 2.24.3 (Apple Git-128) In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-sh@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The __tlb_remove_table_one() in x86 does not contain architecture-specific content, so move it to the generic file. Signed-off-by: Qi Zheng --- arch/x86/include/asm/tlb.h | 19 ------------------- mm/mmu_gather.c | 20 ++++++++++++++++++-- 2 files changed, 18 insertions(+), 21 deletions(-) diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h index 3858dbf75880e..77f52bc1578a7 100644 --- a/arch/x86/include/asm/tlb.h +++ b/arch/x86/include/asm/tlb.h @@ -20,25 +20,6 @@ static inline void tlb_flush(struct mmu_gather *tlb) flush_tlb_mm_range(tlb->mm, start, end, stride_shift, tlb->freed_tables); } -#ifdef CONFIG_PT_RECLAIM -static inline void __tlb_remove_table_one_rcu(struct rcu_head *head) -{ - struct ptdesc *ptdesc; - - ptdesc = container_of(head, struct ptdesc, pt_rcu_head); - __tlb_remove_table(ptdesc); -} - -static inline void __tlb_remove_table_one(void *table) -{ - struct ptdesc *ptdesc; - - ptdesc = table; - call_rcu(&ptdesc->pt_rcu_head, __tlb_remove_table_one_rcu); -} -#define __tlb_remove_table_one __tlb_remove_table_one -#endif /* CONFIG_PT_RECLAIM */ - static inline void invlpg(unsigned long addr) { asm volatile("invlpg (%0)" ::"r" (addr) : "memory"); diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index 1e21022bcf339..7aa6f18c500b2 100644 --- a/mm/mmu_gather.c +++ b/mm/mmu_gather.c @@ -311,13 +311,29 @@ static inline void tlb_table_invalidate(struct mmu_gather *tlb) } } -#ifndef __tlb_remove_table_one +#ifdef CONFIG_PT_RECLAIM +static inline void __tlb_remove_table_one_rcu(struct rcu_head *head) +{ + struct ptdesc *ptdesc; + + ptdesc = container_of(head, struct ptdesc, pt_rcu_head); + __tlb_remove_table(ptdesc); +} + +static inline void __tlb_remove_table_one(void *table) +{ + struct ptdesc *ptdesc; + + ptdesc = table; + call_rcu(&ptdesc->pt_rcu_head, __tlb_remove_table_one_rcu); +} +#else static inline void __tlb_remove_table_one(void *table) { tlb_remove_table_sync_one(); __tlb_remove_table(table); } -#endif +#endif /* CONFIG_PT_RECLAIM */ static void tlb_remove_table_one(void *table) { From patchwork Mon Dec 23 09:41:01 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13918693 Received: from mail-pf1-f180.google.com (mail-pf1-f180.google.com [209.85.210.180]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 97ED3194C8B for ; Mon, 23 Dec 2024 09:46:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.180 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947177; cv=none; b=nPXHwGVcHOru1MbtT30ua5jFoxsm4I8Te/TxKX4q+JN4UBpEtQ3Er/r7WTJ/7/QW7VlDeovUzVWNS7OXX3k1NrqzJng8/SM6OHNQRcSnNvidZWEB0oft8Vba+5XhLrP1f9GO4JOOIMnSqC7I8b8WYE51FGULh+6KlubzzD7PvOI= ARC-Message-Signature: i=1; 
From: Qi Zheng
Subject: [PATCH v3 15/17] mm: pgtable: remove tlb_remove_page_ptdesc()
Date: Mon, 23 Dec 2024 17:41:01 +0800

Here we are explicitly dealing with struct page, and the following logic seems strange:

  tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));
  tlb_remove_page_ptdesc --> tlb_remove_page(tlb, ptdesc_page(pt));

So remove tlb_remove_page_ptdesc() and make callers call tlb_remove_page() directly.

Signed-off-by: Qi Zheng
Originally-by: Peter Zijlstra (Intel)
---
arch/csky/include/asm/pgalloc.h | 2 +- arch/hexagon/include/asm/pgalloc.h | 2 +- arch/loongarch/include/asm/pgalloc.h | 2 +- arch/m68k/include/asm/sun3_pgalloc.h | 2 +- arch/mips/include/asm/pgalloc.h | 2 +- arch/nios2/include/asm/pgalloc.h | 2 +- arch/openrisc/include/asm/pgalloc.h | 2 +- arch/riscv/include/asm/pgalloc.h | 2 +- arch/sh/include/asm/pgalloc.h | 2 +- arch/um/include/asm/pgalloc.h | 8 ++++---- include/asm-generic/tlb.h | 6 ------ 11 files changed, 13 insertions(+), 19 deletions(-)
diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h index f1ce5b7b28f22..936a43a49e704 100644 --- a/arch/csky/include/asm/pgalloc.h +++ b/arch/csky/include/asm/pgalloc.h @@ -64,7 +64,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) #define __pte_free_tlb(tlb, pte, address) \ do { \ pagetable_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc(tlb, page_ptdesc(pte)); \ + tlb_remove_page(tlb, (pte)); \ } while (0) extern void pagetable_init(void);
diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h index 40e42a0e71673..8b1550498f1bf 100644 --- a/arch/hexagon/include/asm/pgalloc.h +++ b/arch/hexagon/include/asm/pgalloc.h @@ -90,7 +90,7 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, #define __pte_free_tlb(tlb, pte, addr) \ do { \ pagetable_dtor((page_ptdesc(pte))); \ - tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ + tlb_remove_page((tlb), (pte)); \ } while (0) #endif
diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h index 7211dff8c969e..5a4f22aeb6189 100644 --- a/arch/loongarch/include/asm/pgalloc.h +++ b/arch/loongarch/include/asm/pgalloc.h @@ -58,7 +58,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) #define __pte_free_tlb(tlb, pte,
address) \ do { \ pagetable_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ + tlb_remove_page((tlb), (pte)); \ } while (0) #ifndef __PAGETABLE_PMD_FOLDED diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h index 2b626cb3ad0ae..63d9f95f5e3dd 100644 --- a/arch/m68k/include/asm/sun3_pgalloc.h +++ b/arch/m68k/include/asm/sun3_pgalloc.h @@ -20,7 +20,7 @@ extern const char bad_pmd_string[]; #define __pte_free_tlb(tlb, pte, addr) \ do { \ pagetable_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ + tlb_remove_page((tlb), (pte)); \ } while (0) static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h index 36d9805033c4b..bbee21345154b 100644 --- a/arch/mips/include/asm/pgalloc.h +++ b/arch/mips/include/asm/pgalloc.h @@ -57,7 +57,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) #define __pte_free_tlb(tlb, pte, address) \ do { \ pagetable_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ + tlb_remove_page((tlb), (pte)); \ } while (0) #ifndef __PAGETABLE_PMD_FOLDED diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h index 12a536b7bfbd4..641cec8fb2a22 100644 --- a/arch/nios2/include/asm/pgalloc.h +++ b/arch/nios2/include/asm/pgalloc.h @@ -31,7 +31,7 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, addr) \ do { \ pagetable_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ + tlb_remove_page((tlb), (pte)); \ } while (0) #endif /* _ASM_NIOS2_PGALLOC_H */ diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h index 596e2355824e3..e9b9bc53ece0b 100644 --- a/arch/openrisc/include/asm/pgalloc.h +++ b/arch/openrisc/include/asm/pgalloc.h @@ -69,7 +69,7 @@ extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, addr) \ do { \ pagetable_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ + tlb_remove_page((tlb), (pte)); \ } while (0) #endif diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h index c8907b8317115..ab4f9b2cf9e11 100644 --- a/arch/riscv/include/asm/pgalloc.h +++ b/arch/riscv/include/asm/pgalloc.h @@ -29,7 +29,7 @@ static inline void riscv_tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt) tlb_remove_ptdesc(tlb, pt); } else { pagetable_dtor(pt); - tlb_remove_page_ptdesc(tlb, pt); + tlb_remove_page(tlb, ptdesc_page((struct ptdesc *)pt)); } } diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h index 96d938fdf2244..43812b2363efd 100644 --- a/arch/sh/include/asm/pgalloc.h +++ b/arch/sh/include/asm/pgalloc.h @@ -35,7 +35,7 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, #define __pte_free_tlb(tlb, pte, addr) \ do { \ pagetable_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ + tlb_remove_page((tlb), (pte)); \ } while (0) #endif /* __ASM_SH_PGALLOC_H */ diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h index f0af23c3aeb2b..98190c318a8e9 100644 --- a/arch/um/include/asm/pgalloc.h +++ b/arch/um/include/asm/pgalloc.h @@ -28,7 +28,7 @@ extern pgd_t *pgd_alloc(struct mm_struct *); #define __pte_free_tlb(tlb, pte, address) \ do { \ pagetable_dtor(page_ptdesc(pte)); \ - tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ + tlb_remove_page((tlb), 
(pte)); \ } while (0) #if CONFIG_PGTABLE_LEVELS > 2 @@ -36,15 +36,15 @@ do { \ #define __pmd_free_tlb(tlb, pmd, address) \ do { \ pagetable_dtor(virt_to_ptdesc(pmd)); \ - tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd)); \ + tlb_remove_page((tlb), virt_to_page(pmd)); \ } while (0) #if CONFIG_PGTABLE_LEVELS > 3 #define __pud_free_tlb(tlb, pud, address) \ do { \ - pagetable_dtor(virt_to_ptdesc(pud)); \ - tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pud)); \ + pagetable_dtor(virt_to_ptdesc(pud)); \ + tlb_remove_page((tlb), virt_to_page(pud)); \ } while (0) #endif diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index 69de47c7ef3c5..8d6cfe5058543 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -504,12 +504,6 @@ static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt) tlb_remove_table(tlb, pt); } -/* Like tlb_remove_ptdesc, but for page-like page directories. */ -static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt) -{ - tlb_remove_page(tlb, ptdesc_page(pt)); -} - static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size) { From patchwork Mon Dec 23 09:41:02 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13918694 Received: from mail-pf1-f178.google.com (mail-pf1-f178.google.com [209.85.210.178]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 514221991C1 for ; Mon, 23 Dec 2024 09:46:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.178 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947190; cv=none; b=gTMGuzBG/ROX5Q8AeKoGqLgSOvG0CAerGUg1XVo+9MGK+/9po9FeDm55oRisCDyCuEv84ugpk/Qg7OF3d8EAXjA7r6U39Q2dW7WgOmpoKOM4kcIAyBXw6fzwk8PYF6Odg3/aar53BKCjMJEkMt+QyFPa8FAEkGKYak/5usqhvhA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947190; c=relaxed/simple; bh=wsOfsxqAjypDbhdyoWQgn2ScMkT8kExqTuyzqqTKK3A=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=cQO9PvNLCdilcjRndQAppTmq13AItqhLZAbnN10zHjdvaXJ4MaKuJiBUJnWP4LDrLJ4XzqvSqIgRv8Nb2bByQDQ09yWIQtGjrlhT0F5e2Yqm998B3XuUsJSH1PnyMUTxNtEAKrtMr+gCCUUNrGFH/z2u/ayheWxvMMiJ+O6j4q8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com; spf=pass smtp.mailfrom=bytedance.com; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b=b8fFLJ6/; arc=none smtp.client-ip=209.85.210.178 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b="b8fFLJ6/" Received: by mail-pf1-f178.google.com with SMTP id d2e1a72fcca58-728e81257bfso3185445b3a.2 for ; Mon, 23 Dec 2024 01:46:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance.com; s=google; t=1734947188; x=1735551988; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=6vzAxma0H0oGKVF+0yjfVv8ni44DJQgUTcLWSpNFGuY=; 
From: Qi Zheng
Subject: [PATCH v3 16/17] mm: pgtable: remove tlb_remove_ptdesc()
Date: Mon, 23 Dec 2024 17:41:02 +0800
Message-Id: <93cce93bf8be04f3a5cd828cc0a48750fb90af44.1734945104.git.zhengqi.arch@bytedance.com>

Just like removing
tlb_remove_page_ptdesc(), remove tlb_remove_ptdesc() as well, and make callers call tlb_remove_table() directly. Signed-off-by: Qi Zheng Originally-by: Peter Zijlstra (Intel) --- arch/arm/include/asm/tlb.h | 8 ++------ arch/arm64/include/asm/tlb.h | 16 ++++------------ arch/riscv/include/asm/pgalloc.h | 14 +++++++------- arch/s390/include/asm/tlb.h | 8 ++++---- include/asm-generic/tlb.h | 7 +------ mm/mmu_gather.c | 11 +++++------ 6 files changed, 23 insertions(+), 41 deletions(-) diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index ea4fbe7b17f6f..ac3881ec342f1 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -32,8 +32,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - struct ptdesc *ptdesc = page_ptdesc(pte); - #ifndef CONFIG_ARM_LPAE /* * With the classic ARM MMU, a pte page has two corresponding pmd @@ -43,16 +41,14 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) __tlb_adjust_range(tlb, addr - PAGE_SIZE, 2 * PAGE_SIZE); #endif - tlb_remove_ptdesc(tlb, ptdesc); + tlb_remove_table(tlb, pte); } static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) { #ifdef CONFIG_ARM_LPAE - struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - - tlb_remove_ptdesc(tlb, ptdesc); + tlb_remove_table(tlb, virt_to_page(pmdp)); #endif } diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index 8d762607285cc..4a60569fed696 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -75,18 +75,14 @@ static inline void tlb_flush(struct mmu_gather *tlb) static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - struct ptdesc *ptdesc = page_ptdesc(pte); - - tlb_remove_ptdesc(tlb, ptdesc); + tlb_remove_table(tlb, pte); } #if CONFIG_PGTABLE_LEVELS > 2 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) { - struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - - tlb_remove_ptdesc(tlb, ptdesc); + tlb_remove_table(tlb, virt_to_page(pmdp)); } #endif @@ -94,12 +90,10 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, unsigned long addr) { - struct ptdesc *ptdesc = virt_to_ptdesc(pudp); - if (!pgtable_l4_enabled()) return; - tlb_remove_ptdesc(tlb, ptdesc); + tlb_remove_table(tlb, virt_to_page(pudp)); } #endif @@ -107,12 +101,10 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4dp, unsigned long addr) { - struct ptdesc *ptdesc = virt_to_ptdesc(p4dp); - if (!pgtable_l5_enabled()) return; - tlb_remove_ptdesc(tlb, ptdesc); + tlb_remove_table(tlb, virt_to_page(p4dp)); } #endif diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h index ab4f9b2cf9e11..25c2e2f262810 100644 --- a/arch/riscv/include/asm/pgalloc.h +++ b/arch/riscv/include/asm/pgalloc.h @@ -26,10 +26,10 @@ static inline void riscv_tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt) { if (riscv_use_sbi_for_rfence()) { - tlb_remove_ptdesc(tlb, pt); + tlb_remove_table(tlb, pt); } else { - pagetable_dtor(pt); - tlb_remove_page(tlb, ptdesc_page((struct ptdesc *)pt)); + pagetable_dtor(page_ptdesc((struct page *)pt)); + tlb_remove_page(tlb, pt); } } @@ -108,14 +108,14 @@ static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, unsigned long addr) { if (pgtable_l4_enabled) - 
riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(pud)); + riscv_tlb_remove_ptdesc(tlb, virt_to_page(pud)); } static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, unsigned long addr) { if (pgtable_l5_enabled) - riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); + riscv_tlb_remove_ptdesc(tlb, virt_to_page(p4d)); } #endif /* __PAGETABLE_PMD_FOLDED */ @@ -144,7 +144,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr) { - riscv_tlb_remove_ptdesc(tlb, virt_to_ptdesc(pmd)); + riscv_tlb_remove_ptdesc(tlb, virt_to_page(pmd)); } #endif /* __PAGETABLE_PMD_FOLDED */ @@ -152,7 +152,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - riscv_tlb_remove_ptdesc(tlb, page_ptdesc(pte)); + riscv_tlb_remove_ptdesc(tlb, pte); } #endif /* CONFIG_MMU */ diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h index da4a7d175f69c..5eed6300f3d72 100644 --- a/arch/s390/include/asm/tlb.h +++ b/arch/s390/include/asm/tlb.h @@ -86,7 +86,7 @@ static inline void pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, tlb->cleared_pmds = 1; if (mm_alloc_pgste(tlb->mm)) gmap_unlink(tlb->mm, (unsigned long *)pte, address); - tlb_remove_ptdesc(tlb, virt_to_ptdesc(pte)); + tlb_remove_table(tlb, virt_to_page(pte)); } /* @@ -105,7 +105,7 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_puds = 1; - tlb_remove_ptdesc(tlb, virt_to_ptdesc(pmd)); + tlb_remove_table(tlb, virt_to_page(pmd)); } /* @@ -123,7 +123,7 @@ static inline void pud_free_tlb(struct mmu_gather *tlb, pud_t *pud, tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_p4ds = 1; - tlb_remove_ptdesc(tlb, virt_to_ptdesc(pud)); + tlb_remove_table(tlb, virt_to_page(pud)); } /* @@ -141,7 +141,7 @@ static inline void p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d, __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; - tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); + tlb_remove_table(tlb, virt_to_page(p4d)); } #endif /* _S390_TLB_H */ diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index 8d6cfe5058543..583e95568f52b 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -211,7 +211,7 @@ struct mmu_table_batch { #ifndef __HAVE_ARCH_TLB_REMOVE_TABLE static inline void __tlb_remove_table(void *table) { - struct ptdesc *ptdesc = (struct ptdesc *)table; + struct ptdesc *ptdesc = page_ptdesc((struct page *)table); pagetable_dtor(ptdesc); pagetable_free(ptdesc); @@ -499,11 +499,6 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page) return tlb_remove_page_size(tlb, page, PAGE_SIZE); } -static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt) -{ - tlb_remove_table(tlb, pt); -} - static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size) { diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index 7aa6f18c500b2..c58ce4539c56f 100644 --- a/mm/mmu_gather.c +++ b/mm/mmu_gather.c @@ -314,18 +314,17 @@ static inline void tlb_table_invalidate(struct mmu_gather *tlb) #ifdef CONFIG_PT_RECLAIM static inline void __tlb_remove_table_one_rcu(struct rcu_head *head) { - struct ptdesc *ptdesc; + struct page *page; - ptdesc = container_of(head, struct ptdesc, pt_rcu_head); - __tlb_remove_table(ptdesc); + page = 
container_of(head, struct page, rcu_head); + __tlb_remove_table(page); } static inline void __tlb_remove_table_one(void *table) { - struct ptdesc *ptdesc; + struct page *page = (struct page *)table; - ptdesc = table; - call_rcu(&ptdesc->pt_rcu_head, __tlb_remove_table_one_rcu); + call_rcu(&page->rcu_head, __tlb_remove_table_one_rcu); } #else static inline void __tlb_remove_table_one(void *table) From patchwork Mon Dec 23 09:41:03 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zheng X-Patchwork-Id: 13918695 Received: from mail-pf1-f181.google.com (mail-pf1-f181.google.com [209.85.210.181]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 28B6A1A4F21 for ; Mon, 23 Dec 2024 09:46:41 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.181 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947203; cv=none; b=eOPHSLYA78cavThWmgEQTqp8RiOjSgovvusU4g1CQplwn8Tpvl4CI3hMA/HSSuc6kld/FbXJf9DfZh+547WkcWfF5WraEyhx8adJM1r5uNrXg1crfgt8EmKgHoQh5unCavCtl9n8TU/Mc9Mne3chcbxT1/eZkRK5AGzGcrsbFVw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734947203; c=relaxed/simple; bh=z7CWl7O3eullh1F2ul6vkw9nxR1R7juWYomvqNSqC4Q=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=IEBewRxMutOB4mOCnjTGmLsHIdwNcSTZr2yGhYX5w7Ze4y/SJJW5LlshzZxveWVrnSVu7BtAxNJm8ecBKuo+Xjx+WMtbNPFA4xBQl9kJXdovv1hkCyFazQxLZ9jkNUay+Py9ZWtKx4fBHjVW25F5ktaTTEYY/+YZuOnMHNUeahY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com; spf=pass smtp.mailfrom=bytedance.com; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b=gBJl0ZEh; arc=none smtp.client-ip=209.85.210.181 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=bytedance.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=bytedance.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=bytedance.com header.i=@bytedance.com header.b="gBJl0ZEh" Received: by mail-pf1-f181.google.com with SMTP id d2e1a72fcca58-725dac69699so3489295b3a.0 for ; Mon, 23 Dec 2024 01:46:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance.com; s=google; t=1734947200; x=1735552000; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=/Vy7m3PQdTp9s3cHn2TFUEZi29ZbV6wwwxQw/bfJltk=; b=gBJl0ZEhPzt/YSVHFKr2G4DdLNfHEeophzf6xmt5hCsiabjCqcPJbB3Q9Y/7nmsme7 poM4DwoLIFZWGSCWZJTwjTM8ndd0C8Yw18HkASeG7zFK+VVeS2Nf69oEJRfu5s/AJkYj aQkDbDagyRe8U46EnyVq2TwDG+CPWkCmxCZFQrkRpzBtV7aKmFBumvlZif5WOL3WRKZT jABbdnWKZoXLBvRJeaeVUqY1vOLNfyTmAau39IhKdbinpDl7svx0FvYBFps/t4XqkLvo nA4zbKRN6ZAB3c7BaDnFnRiqDpJjSV3Pzx1TGH2u/AiK68Hf+z0CDLrcsFWE0lJn3G+t MEaA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734947200; x=1735552000; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/Vy7m3PQdTp9s3cHn2TFUEZi29ZbV6wwwxQw/bfJltk=; b=phJJFM/JqkTOVBAPyGnA9+ZR/KeTDuu533lqFUYvj3NHj3BBy1HJvIPSUJdWMYpbl6 
From: Qi Zheng
To: peterz@infradead.org, agordeev@linux.ibm.com, kevin.brodsky@arm.com,
	tglx@linutronix.de, david@redhat.com, jannh@google.com, hughd@google.com,
	yuzhao@google.com, willy@infradead.org, muchun.song@linux.dev,
	vbabka@kernel.org, lorenzo.stoakes@oracle.com, akpm@linux-foundation.org,
	rientjes@google.com, vishal.moola@gmail.com, arnd@arndb.de, will@kernel.org,
	aneesh.kumar@kernel.org, npiggin@gmail.com, dave.hansen@linux.intel.com,
	rppt@kernel.org, ryan.roberts@arm.com
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-kernel@vger.kernel.org, x86@kernel.org, linux-arch@vger.kernel.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
	linux-sh@vger.kernel.org, linux-um@lists.infradead.org, Qi Zheng
Subject: [PATCH v3 17/17] mm: pgtable: introduce generic pagetable_dtor_free()
Date: Mon, 23 Dec 2024 17:41:03 +0800
Message-Id: <3ade33c5049f465dc2f0b95edc2d68c80f2048c9.1734945104.git.zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To:
References:

The pte_free(), pmd_free(), __pud_free() and __p4d_free() in
asm-generic/pgalloc.h and the generic __tlb_remove_table() are basically
the same, so let's introduce pagetable_dtor_free() to deduplicate them.

In addition, the pagetable_dtor_free() in s390 actually does the same
thing, so let's have s390 call the generic pagetable_dtor_free() as well.
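[Not part of the patch: a stand-alone sketch of the pattern being
deduplicated. The struct, the nr field and pmd_free_model() below are
simplified stand-ins, not the kernel definitions, so the pattern can be
compiled and run in user space; only the dtor-then-free call sequence
mirrors the helper this patch introduces.]

/*
 * User-space model only: "ptdesc", pagetable_dtor() and pagetable_free()
 * here are stand-ins for the kernel's page table descriptor helpers.
 */
#include <stdio.h>
#include <stdlib.h>

struct ptdesc {
	long nr;			/* stand-in for page table accounting */
};

static void pagetable_dtor(struct ptdesc *ptdesc)
{
	ptdesc->nr--;			/* model: undo constructor accounting */
	printf("dtor: nr=%ld\n", ptdesc->nr);
}

static void pagetable_free(struct ptdesc *ptdesc)
{
	free(ptdesc);			/* model: return the page to the allocator */
}

/*
 * The consolidated helper: one place that pairs the destructor with the
 * free, instead of open-coding both steps in pte_free(), pmd_free(),
 * __pud_free(), __p4d_free() and __tlb_remove_table().
 */
static void pagetable_dtor_free(struct ptdesc *ptdesc)
{
	pagetable_dtor(ptdesc);
	pagetable_free(ptdesc);
}

/* A hypothetical caller, shaped like the simplified free paths above. */
static void pmd_free_model(struct ptdesc *ptdesc)
{
	pagetable_dtor_free(ptdesc);
}

int main(void)
{
	struct ptdesc *ptdesc = calloc(1, sizeof(*ptdesc));

	if (!ptdesc)
		return 1;
	ptdesc->nr = 1;			/* model: constructor accounting */
	pmd_free_model(ptdesc);
	return 0;
}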
Signed-off-by: Qi Zheng
Suggested-by: Peter Zijlstra (Intel)
---
 arch/s390/mm/pgalloc.c        | 18 ++++++------------
 include/asm-generic/pgalloc.h | 23 ++++-------------------
 include/asm-generic/tlb.h     |  5 +----
 include/linux/mm.h            |  8 ++++++++
 4 files changed, 19 insertions(+), 35 deletions(-)

diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 3e002dea6278f..1e0727be48eaf 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -180,32 +180,26 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 	return table;
 }
 
-static void pagetable_dtor_free(struct ptdesc *ptdesc)
-{
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
-}
-
 void page_table_free(struct mm_struct *mm, unsigned long *table)
 {
-	struct ptdesc *ptdesc = virt_to_ptdesc(table);
+	struct page *page = virt_to_page(table);
 
-	pagetable_dtor_free(ptdesc);
+	pagetable_dtor_free(page);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void pte_free_now(struct rcu_head *head)
 {
-	struct ptdesc *ptdesc = container_of(head, struct ptdesc, pt_rcu_head);
+	struct page *page = container_of(head, struct page, rcu_head);
 
-	pagetable_dtor_free(ptdesc);
+	pagetable_dtor_free(page);
 }
 
 void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
 {
-	struct ptdesc *ptdesc = virt_to_ptdesc(pgtable);
+	struct page *page = virt_to_page(pgtable);
 
-	call_rcu(&ptdesc->pt_rcu_head, pte_free_now);
+	call_rcu(&page->rcu_head, pte_free_now);
 	/*
 	 * THPs are not allowed for KVM guests. Warn if pgste ever reaches here.
 	 * Turn to the generic pte_free_defer() version once gmap is removed.
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 4afb346eae255..7d327889df306 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -107,10 +107,7 @@ static inline pgtable_t pte_alloc_one_noprof(struct mm_struct *mm)
  */
 static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
 {
-	struct ptdesc *ptdesc = page_ptdesc(pte_page);
-
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
+	pagetable_dtor_free(pte_page);
 }
 
 
@@ -150,11 +147,7 @@ static inline pmd_t *pmd_alloc_one_noprof(struct mm_struct *mm, unsigned long ad
 #ifndef __HAVE_ARCH_PMD_FREE
 static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
-	struct ptdesc *ptdesc = virt_to_ptdesc(pmd);
-
-	BUG_ON((unsigned long)pmd & (PAGE_SIZE-1));
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
+	pagetable_dtor_free(virt_to_page(pmd));
 }
 #endif
 
@@ -199,11 +192,7 @@ static inline pud_t *pud_alloc_one_noprof(struct mm_struct *mm, unsigned long ad
 
 static inline void __pud_free(struct mm_struct *mm, pud_t *pud)
 {
-	struct ptdesc *ptdesc = virt_to_ptdesc(pud);
-
-	BUG_ON((unsigned long)pud & (PAGE_SIZE-1));
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
+	pagetable_dtor_free(virt_to_page(pud));
 }
 
 #ifndef __HAVE_ARCH_PUD_FREE
@@ -245,11 +234,7 @@ static inline p4d_t *p4d_alloc_one_noprof(struct mm_struct *mm, unsigned long ad
 
 static inline void __p4d_free(struct mm_struct *mm, p4d_t *p4d)
 {
-	struct ptdesc *ptdesc = virt_to_ptdesc(p4d);
-
-	BUG_ON((unsigned long)p4d & (PAGE_SIZE-1));
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
+	pagetable_dtor_free(virt_to_page(p4d));
 }
 
 #ifndef __HAVE_ARCH_P4D_FREE
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 583e95568f52b..ef25169523602 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -211,10 +211,7 @@ struct mmu_table_batch {
 #ifndef __HAVE_ARCH_TLB_REMOVE_TABLE
 static inline void __tlb_remove_table(void *table)
 {
-	struct ptdesc *ptdesc = page_ptdesc((struct page *)table);
-
-	pagetable_dtor(ptdesc);
-	pagetable_free(ptdesc);
+	pagetable_dtor_free(table);
 }
 #endif
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index cad11fa10c192..cd078d51f47c7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3001,6 +3001,14 @@ static inline void pagetable_dtor(struct ptdesc *ptdesc)
 	lruvec_stat_sub_folio(folio, NR_PAGETABLE);
 }
 
+static inline void pagetable_dtor_free(void *table)
+{
+	struct ptdesc *ptdesc = page_ptdesc((struct page *)table);
+
+	pagetable_dtor(ptdesc);
+	pagetable_free(ptdesc);
+}
+
 static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc)
 {
 	struct folio *folio = ptdesc_folio(ptdesc);